Test Report

Test Suite: conformance/parallel

Duration: 6698.0 sec
Test Cases: 2674
Failures: 26

Test Results


Test Class: no-testclass
[sig-arch][Early] Managed cluster should start all core operators [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
no-testclass
Time Taken: 0.4s

[sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
no-testclass
Time Taken: 0.4s

[sig-cluster-lifecycle][Feature:Machines][Early] Managed cluster should have same number of Machines and Nodes [Suite:openshift/conformance/parallel]
no-testclass
Time Taken: 0.4s

[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 67.0s

[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext3)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:44.274: INFO: Driver cinder doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:43.900: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 147.0s

[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:43.282: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
[sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 44.3s

[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 175.0s

[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:40.302: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:39.947: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:39.803: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource. [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 35.1s

[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:35.381: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
[sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:34.968: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
[sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 17.5s

[sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
no-testclass
Time Taken: 266.0s

[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 12.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 10:09:10.202: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:09:10.389934   59853 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:09:10.390: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithformat]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "ostest-n5rnf-worker-0-8kq82" using path "/tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073"
Oct 13 10:09:12.458: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073 && dd if=/dev/zero of=/tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 10:09:12.659: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 10:09:12.828: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073 && chmod o+rwx /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 10:09:13.110: INFO: Creating a PV followed by a PVC
Oct 13 10:09:13.160: INFO: Waiting for PV local-pvs7blb to bind to PVC pvc-dfjr2
Oct 13 10:09:13.160: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dfjr2] to have phase Bound
Oct 13 10:09:13.177: INFO: PersistentVolumeClaim pvc-dfjr2 found but phase is Pending instead of Bound.
Oct 13 10:09:15.184: INFO: PersistentVolumeClaim pvc-dfjr2 found but phase is Pending instead of Bound.
Oct 13 10:09:17.242: INFO: PersistentVolumeClaim pvc-dfjr2 found but phase is Pending instead of Bound.
Oct 13 10:09:19.249: INFO: PersistentVolumeClaim pvc-dfjr2 found but phase is Pending instead of Bound.
Oct 13 10:09:21.256: INFO: PersistentVolumeClaim pvc-dfjr2 found and phase=Bound (8.096047407s)
Oct 13 10:09:21.256: INFO: Waiting up to 3m0s for PersistentVolume local-pvs7blb to have phase Bound
Oct 13 10:09:21.263: INFO: PersistentVolume local-pvs7blb found and phase=Bound (7.132318ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 10:09:21.270: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: blockfswithformat]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 10:09:21.270: INFO: Deleting PersistentVolumeClaim "pvc-dfjr2"
Oct 13 10:09:21.278: INFO: Deleting PersistentVolume "local-pvs7blb"
Oct 13 10:09:21.293: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 10:09:21.453: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Tear down block device "/dev/loop0" on node "ostest-n5rnf-worker-0-8kq82" at path /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file
Oct 13 10:09:21.587: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Removing the test directory /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073
Oct 13 10:09:21.749: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-3610" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed

Stderr
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:09.665: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:09.277: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping

Stderr
[sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:08.815: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
[sig-storage] In-tree Volumes [Driver: windows-gcepd] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:08.381: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:07.947: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:07.560: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:07.173: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:06.774: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext3 -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:06.421: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:06.019: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
[sig-node] Pods should be updated [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
no-testclass
Time Taken: 47.4s

[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:55.507: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:55.194: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 41.2s

[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 47.4s

[sig-apps] Deployment iterative rollouts should eventually progress [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 74.0s

[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:51.887: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:51.848: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:51.535: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:51.156: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
[sig-node] PodTemplates should delete a collection of pod templates [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
no-testclass
Time Taken: 0.9s

[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 189.0s

[sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:46.585: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:46.235: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ext3)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:45.859: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 177.0s

[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:21.685: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/capacity.go:78]: Driver nfs doesn't publish storage capacity -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:21.259: INFO: Driver nfs doesn't publish storage capacity -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/capacity.go:78]: Driver nfs doesn't publish storage capacity -- skipping

Stderr
[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
no-testclass
Time Taken: 56.5s

[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:15.657: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
no-testclass
Time Taken: 55.2s

[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:10.429: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: windows-gcepd] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:09.942: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:09.467: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:08.980: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:08.487: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:08.055: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
no-testclass
Time Taken: 53.3s

[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling] [Skipped:gce] [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 10:08:01.548: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)

Stderr
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:01.177: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:00.809: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
[sig-storage] In-tree Volumes [Driver: windows-gcepd] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:00.380: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:59.907: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:59.482: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__EmptyDir_volumes_should_support__non-root,0644,default___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.3s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:56.124: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:55.769: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_be_able_to_deny_pod_and_configmap_creation__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:53.584: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-link__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_read_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 57.8s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:53.011: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Downward_API_volume_should_provide_node_allocatable__memory__as_default_memory_limit_if_the_limit_is_not_set__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 53.1s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:52.639: INFO: Driver nfs doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_fails_if_only_datastore_is_specified_in_the_storage_class__No_shared_datastores_exist_among_all_the_nodes___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 10:07:52.761: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:07:53.054221   56481 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:07:53.054: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 10:07:53.059: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-6933" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:52.340: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:52.128: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:51.756: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:51.416: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-auth__Certificates_API__Privileged_ClusterAdmin__should_support_building_a_client_with_a_CSR__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 11.3s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 44.3s

_sig-api-machinery__ResourceQuota_should_verify_ResourceQuota_with_best_effort_scope.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 17.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:34.729: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:34.362: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:33.996: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-node__Kubelet_when_scheduling_a_busybox_command_in_a_pod_should_print_the_output_to_logs__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 26.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:25.253: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-node__Probing_container_should_be_restarted_with_a_local_redirect_http_liveness_probe__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:24.775: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:24.592: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:24.386: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:24.247: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:23.999: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:23.920: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:23.663: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Proxy_server_should_support_--unix-socket=/path___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:241]: driver cinder does not support volume limits
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumelimits
Oct 13 10:07:22.307: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:07:22.544231   55490 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:07:22.544: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that all csinodes have volume limits [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:238
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumelimits-7902" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:241]: driver cinder does not support volume limits

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:21.703: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:21.360: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_perform_rolling_updates_and_roll_backs_of_template_modifications__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 243.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:14.721: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_dynamically_created_pv_with_allowed_zones_specified_in_storage_class,_shows_the_right_zone_information_on_its_labels__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 10:07:14.099: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:07:14.301712   54940 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:07:14.301: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 10:07:14.309: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-95" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:13.506: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__EmptyDir_volumes_should_support__root,0666,default___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:13.176: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:12.959: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:12.852: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:12.481: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:12.048: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:11.678: INFO: Driver "nfs" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:11.383: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:11.080: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:10.749: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:10.362: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:09.960: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:09.580: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-api-machinery__ResourceQuota__Feature_PodPriority__should_verify_ResourceQuota's_multiple_priority_class_scope__quota_set_to_pod_count__2__against_2_pods_with_same_priority_classes.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 9.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 58.6s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:52.406: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:52.021: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:51.660: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:51.310: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:50.861: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:50.410: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:49.988: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__InitContainer__NodeConformance__should_invoke_init_containers_on_a_RestartNever_pod__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 50.2s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__CSI_Ephemeral-volume__default_fs___ephemeral_should_support_two_pods_which_share_the_same_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 183.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:40.183: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:39.818: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:39.452: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:39.076: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:38.724: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:38.322: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:37.920: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_mutate_pod_and_apply_defaults_after_mutation__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 30.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:32.698: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:32.348: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:31.983: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:31.596: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:31.221: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:30.852: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:30.499: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:30.216: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:29.888: INFO: Driver nfs doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping

Stderr
_sig-api-machinery__CustomResourcePublishOpenAPI__Privileged_ClusterAdmin__updates_the_published_spec_when_one_version_gets_renamed__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 107.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:28.330: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:27.995: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-network__DNS_should_resolve_DNS_of_partial_qualified_names_for_services__LinuxOnly___Conformance___Skipped_Proxy___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 54.7s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:06:26.267: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__DisruptionController_Listing_PodDisruptionBudgets_for_all_namespaces_should_list_and_delete_a_collection_of_PodDisruptionBudgets__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 7.6s

_sig-node__Lease_lease_API_should_be_available__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__PersistentVolumes-local___Volume_type__blockfswithformat__Two_pods_mounting_a_local_volume_at_the_same_time_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.6s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 83.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:59.835: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:59.508: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:59.140: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:58.631: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:58.099: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:57.759: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:57.464: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__Volume_Operations_Storm__Feature_vsphere__should_create_pod_with_many_volumes_and_verify_no_attach_call_fails__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_ops_storm.go:67]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Operations Storm [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-ops-storm
Oct 13 10:05:56.800: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:05:57.126686   51322 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:05:57.126: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Operations Storm [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_ops_storm.go:66
Oct 13 10:05:57.131: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Operations Storm [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-ops-storm-2109" for this suite.
[AfterEach] [sig-storage] Volume Operations Storm [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_ops_storm.go:80
STEP: Deleting PVCs
STEP: Deleting StorageClass
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_ops_storm.go:67]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:55.394: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__CustomResourceDefinition_resources__Privileged_ClusterAdmin__Simple_CustomResourceDefinition_getting/updating/patching_custom_resource_definition_status_sub-resource_works___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.7s

_sig-node__Docker_Containers_should_be_able_to_override_the_image's_default_command_and_arguments__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 34.9s

_sig-node__Downward_API_should_provide_pod_UID_as_env_vars__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 39.1s

_sig-cli__Kubectl_client_Kubectl_replace_should_update_a_single-container_pod's_image___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.2s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:30.445: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:30.008: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:29.656: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:29.307: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:28.915: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:28.494: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:28.116: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-network__Services_should_be_able_to_change_the_type_from_ClusterIP_to_ExternalName__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 81.0s

_sig-storage__Volume_Provisioning_On_Clustered_Datastore__Feature_vsphere__verify_dynamic_provision_with_spbm_policy_on_clustered_datastore__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.1s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:53]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-provision
Oct 13 10:05:25.550: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:05:25.794541   50451 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:05:25.794: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:52
Oct 13 10:05:25.802: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-provision-6754" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:53]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:24.743: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__ResourceQuota__Feature_PodPriority__should_verify_ResourceQuota's_priority_class_scope__quota_set_to_pod_count__1__against_a_pod_with_different_priority_class__ScopeSelectorOpExists_.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 6.9s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:17.478: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:17.148: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:16.781: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:16.419: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:16.131: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-node__Container_Runtime_blackbox_test_on_terminated_container_should_report_termination_message__LinuxOnly__if_TerminationMessagePath_is_set_as_non-root_user_and_at_a_non-default_path__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 47.5s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.9s

_sig-node__Events_should_be_sent_by_kubelets_and_the_scheduler_about_pods_scheduling_and_running___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 47.6s

_sig-storage__ConfigMap_optional_updates_should_be_reflected_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 126.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 52.1s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:03.272: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:03.261: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:02.973: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:02.838: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:02.617: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:02.528: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:02.277: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:02.220: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:05:01.946: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 162.0s

_sig-cli__Kubectl_client_Simple_pod_should_support_exec_through_an_HTTP_proxy__Skipped_Proxy___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 53.8s

_sig-apps__Deployment_should_validate_Deployment_Status_endpoints__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 27.0s

_sig-storage__ConfigMap_should_be_consumable_from_pods_in_volume_with_mappings_as_non-root__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.1s

_sig-storage__PersistentVolumes-local___Volume_type__blockfswithoutformat__Set_fsGroup_for_local_volume_should_set_different_fsGroup_for_second_pod_if_first_pod_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 16.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 10:04:07.271: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:04:07.530687   47341 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:04:07.530: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "ostest-n5rnf-worker-0-94fxs" using path "/tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373"
Oct 13 10:04:09.628: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373 && dd if=/dev/zero of=/tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 10:04:09.792: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 10:04:09.930: INFO: Creating a PV followed by a PVC
Oct 13 10:04:09.956: INFO: Waiting for PV local-pvcfbdv to bind to PVC pvc-8xh4r
Oct 13 10:04:09.956: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8xh4r] to have phase Bound
Oct 13 10:04:09.965: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:11.975: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:13.986: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:15.992: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:17.998: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:20.004: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:22.013: INFO: PersistentVolumeClaim pvc-8xh4r found and phase=Bound (12.056430146s)
Oct 13 10:04:22.013: INFO: Waiting up to 3m0s for PersistentVolume local-pvcfbdv to have phase Bound
Oct 13 10:04:22.018: INFO: PersistentVolume local-pvcfbdv found and phase=Bound (4.999665ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 10:04:22.037: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: blockfswithoutformat]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 10:04:22.038: INFO: Deleting PersistentVolumeClaim "pvc-8xh4r"
Oct 13 10:04:22.050: INFO: Deleting PersistentVolume "local-pvcfbdv"
Oct 13 10:04:22.069: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Tear down block device "/dev/loop0" on node "ostest-n5rnf-worker-0-94fxs" at path /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file
Oct 13 10:04:22.261: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Removing the test directory /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373
Oct 13 10:04:22.432: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-4197" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed

Stderr
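
The Stdout above captures the node-side setup this test performs before it is skipped: a backing file is created with dd, attached to a loop device with losetup, and then exposed as a local PV/PVC pair. A minimal Go sketch of that setup step, assuming root access on the node and using only the commands visible in the log (the directory path is illustrative; the test generates a random one):

  package example

  import (
      "fmt"
      "os/exec"
  )

  // createLoopBackedVolume reproduces the node-side commands visible in the log:
  //   mkdir -p <dir> && dd if=/dev/zero of=<dir>/file bs=4096 count=5120 && losetup -f <dir>/file
  // It returns the loop device that losetup attached to the backing file.
  func createLoopBackedVolume(dir string) (string, error) {
      setup := fmt.Sprintf(
          "mkdir -p %s && dd if=/dev/zero of=%s/file bs=4096 count=5120 && losetup -f %s/file",
          dir, dir, dir)
      if out, err := exec.Command("sh", "-c", setup).CombinedOutput(); err != nil {
          return "", fmt.Errorf("setup failed: %v: %s", err, out)
      }
      // Ask losetup which device now backs the file, as the test does in its second exec.
      query := fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir)
      dev, err := exec.Command("sh", "-c", query).Output()
      if err != nil {
          return "", err
      }
      return string(dev), nil
  }
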
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:04:06.615: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 62.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:59.653: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:59.273: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.8s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 71.0s

_sig-apps__CronJob_should_schedule_multiple_jobs_concurrently__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 67.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:55.246: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:54.941: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:54.854: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 10:03:54.205: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:03:54.416742   47097 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:03:54.416: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 10:03:54.420: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-8416" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:53.689: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:53.308: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-node___Feature_Example__Liveness_liveness_pods_should_be_automatically_restarted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 98.0s

_sig-storage__PersistentVolumes-local___Volume_type__tmpfs__Set_fsGroup_for_local_volume_should_set_different_fsGroup_for_second_pod_if_first_pod_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 5.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 10:03:50.666: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:03:50.961501   47027 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:03:50.961: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: tmpfs]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating tmpfs mount point on node "ostest-n5rnf-worker-0-j4pkp" at path "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4"
Oct 13 10:03:53.060: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4" "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4"] Namespace:e2e-persistent-local-volumes-test-6456 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-wxb9f ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 10:03:53.221: INFO: Creating a PV followed by a PVC
Oct 13 10:03:53.249: INFO: Waiting for PV local-pvfmb5b to bind to PVC pvc-8g8x4
Oct 13 10:03:53.249: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8g8x4] to have phase Bound
Oct 13 10:03:53.255: INFO: PersistentVolumeClaim pvc-8g8x4 found but phase is Pending instead of Bound.
Oct 13 10:03:55.273: INFO: PersistentVolumeClaim pvc-8g8x4 found and phase=Bound (2.02414413s)
Oct 13 10:03:55.273: INFO: Waiting up to 3m0s for PersistentVolume local-pvfmb5b to have phase Bound
Oct 13 10:03:55.281: INFO: PersistentVolume local-pvfmb5b found and phase=Bound (8.273116ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 10:03:55.297: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: tmpfs]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 10:03:55.298: INFO: Deleting PersistentVolumeClaim "pvc-8g8x4"
Oct 13 10:03:55.315: INFO: Deleting PersistentVolume "local-pvfmb5b"
STEP: Unmount tmpfs mount point on node "ostest-n5rnf-worker-0-j4pkp" at path "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4"
Oct 13 10:03:55.330: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4"] Namespace:e2e-persistent-local-volumes-test-6456 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-wxb9f ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Removing the test directory
Oct 13 10:03:55.517: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4] Namespace:e2e-persistent-local-volumes-test-6456 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-wxb9f ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-6456" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed

Stderr
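
Both PersistentVolumes-local entries above reach the same temporary skip (#73168) only after binding a local PV/PVC pair; the wait loop in their Stdout polls the claim until its phase is Bound. A minimal client-go sketch of that polling pattern, assuming a configured Clientset and an illustrative fixed poll interval; this is not the e2e framework's own helper:

  package example

  import (
      "context"
      "fmt"
      "time"

      v1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
  )

  // waitForPVCBound polls a PersistentVolumeClaim until its phase is Bound,
  // mirroring the "found but phase is Pending instead of Bound" lines in the log.
  func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
      deadline := time.Now().Add(timeout)
      for time.Now().Before(deadline) {
          pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
          if err != nil {
              return err
          }
          if pvc.Status.Phase == v1.ClaimBound {
              return nil
          }
          fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
          time.Sleep(2 * time.Second)
      }
      return fmt.Errorf("timed out waiting for PVC %s/%s to be Bound", ns, name)
  }
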
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:50.186: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_secret.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 18.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:40.890: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 68.0s

_sig-storage__Volume_Disk_Format__Feature_vsphere__verify_disk_format_type_-_zeroedthick_is_honored_for_dynamically_provisioned_pv_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:71]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-disk-format
Oct 13 10:03:40.292: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:03:40.506380   46682 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:03:40.506: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:70
Oct 13 10:03:40.509: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-disk-format-7783" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:71]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_not_be_able_to_mutate_or_prevent_deletion_of_webhook_configuration_objects__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 30.9s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 90.0s

_sig-node__Probing_container_should__not__be_restarted_with_a_exec__cat_/tmp/health__liveness_probe__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 268.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:30.619: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:30.276: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__EmptyDir_volumes_when_FSGroup_is_specified__LinuxOnly___NodeFeature_FSGroup__volume_on_tmpfs_should_have_the_correct_mode_using_FSGroup__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 23.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:26.924: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__EmptyDir_wrapper_volumes_should_not_conflict__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 26.9s

_sig-node__Docker_Containers_should_be_able_to_override_the_image's_default_arguments__docker_cmd___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.3s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:06.091: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:05.750: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:03:05.363: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.2s

_sig-storage__PersistentVolumes-local___Volume_type__blockfswithformat__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_write_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 30.9s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 62.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:52.372: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:52.057: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:51.737: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:51.413: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 147.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:48.919: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:48.591: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Volume_Provisioning_on_Datastore__Feature_vsphere__verify_dynamically_provisioned_pv_using_storageclass_fails_on_an_invalid_datastore__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_datastore.go:61]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Provisioning on Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-datastore
Oct 13 10:02:47.784: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:02:48.067441   44078 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:02:48.067: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning on Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_datastore.go:60
Oct 13 10:02:48.075: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Provisioning on Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-datastore-3006" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_datastore.go:61]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:47.187: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-api-machinery__Garbage_collector_should_support_cascading_deletion_of_custom_resources__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 19.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:45.970: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-node__Docker_Containers_should_be_able_to_override_the_image's_default_command__docker_entrypoint___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:22.076: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Secrets_should_be_consumable_in_multiple_volumes_in_a_pod__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.1s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:17.693: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-node__NodeLease_when_the_NodeLease_feature_is_enabled_the_kubelet_should_report_node_status_infrequently__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 81.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:12.610: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:12.278: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:11.945: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:11.637: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__Projected_configMap_updates_should_be_reflected_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 106.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:07.339: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:06.987: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:06.627: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:06.272: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:05.922: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-node__Security_Context_When_creating_a_pod_with_readOnlyRootFilesystem_should_run_the_container_with_readonly_rootfs_when_readOnlyRootFilesystem=true__LinuxOnly___NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 41.1s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:04.530: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:02:04.177: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-node__NodeLease_when_the_NodeLease_feature_is_enabled_the_kubelet_should_create_and_update_a_lease_in_the_kube-node-lease_namespace__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 67.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:48.800: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:48.284: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:47.841: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:47.419: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 130.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:29.795: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:29.423: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 58.1s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 50.3s

_sig-storage__PersistentVolumes-local___Volume_type__blockfswithformat__Two_pods_mounting_a_local_volume_one_after_the_other_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.0s

_sig-auth__ServiceAccounts_should_mount_projected_service_account_token__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.4s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:19.890: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV,_based_on_the_allowed_zones_and_storage_policy_specified_in_storage_class__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 10:01:19.729: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:01:19.934220   40849 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:01:19.934: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 10:01:19.940: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-3265" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:19.538: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 53.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:11.640: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:11.179: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:10.766: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping

Stderr
_sig-apps__DisruptionController_evictions__enough_pods,_absolute_=>_should_allow_an_eviction__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.1s

_sig-apps__CronJob_should_replace_jobs_when_ReplaceConcurrent__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 117.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:03.844: INFO: Driver hostPath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:03.549: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:03.414: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:03.117: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:02.676: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:02.266: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:01:01.569: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__Projected_downwardAPI_should_provide_node_allocatable__memory__as_default_memory_limit_if_the_limit_is_not_set__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 35.2s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 35.4s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 176.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:33.879: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-cli__Kubectl_Port_forwarding_With_a_server_listening_on_0.0.0.0_that_expects_a_client_request_should_support_a_client_that_connects,_sends_DATA,_and_disconnects__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 57.4s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:26.164: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-apps__Deployment_deployment_should_delete_old_replica_sets__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 45.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:24.500: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_copy_should_copy_a_file_from_a_running_Pod__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 57.4s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:23.531: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 41.0s

_sig-node__Pods_should_be_submitted_and_removed__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 43.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 46.2s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:07.636: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:07.318: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:06.917: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:06.486: INFO: Driver "nfs" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:06.105: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:00:05.739: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-cli__Kubectl_Port_forwarding_With_a_server_listening_on_localhost_should_support_forwarding_over_websockets__Skipped_Proxy___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.1s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:52.634: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-cli__Kubectl_client_Kubectl_client-side_validation_should_create/apply_a_valid_CR_with_arbitrary-extra_properties_for_CRD_with_partially-specified_validation_schema__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 17.0s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:48.419: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:47.914: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:47.560: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__EmptyDir_volumes_should_support__non-root,0777,tmpfs___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 37.2s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:46.987: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:46.511: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__GKE_local_SSD__Feature_GKELocalSSD__should_write_and_read_from_node_local_SSD__Feature_GKELocalSSD___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.1s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/gke_local_ssd.go:38]: Only supported for providers [gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] GKE local SSD [Feature:GKELocalSSD]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename localssd
Oct 13 09:59:46.867: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:59:47.118750   36553 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:59:47.118: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] GKE local SSD [Feature:GKELocalSSD]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/gke_local_ssd.go:37
Oct 13 09:59:47.130: INFO: Only supported for providers [gke] (not openstack)
[AfterEach] [sig-storage] GKE local SSD [Feature:GKELocalSSD]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-localssd-9872" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/gke_local_ssd.go:38]: Only supported for providers [gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_Snapshot__delete_policy___snapshottable_Feature_VolumeSnapshotDataSource__volume_snapshot_controller__should_check_snapshot_fields,_check_restore_correctly_works_after_modifying_source_data,_check_deletion__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 222.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:32.023: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:31.619: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:31.233: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:30.763: INFO: Driver cinder doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:30.367: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:30.014: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 55.3s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:26.668: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-network__DNS_should_provide_DNS_for_ExternalName_services__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 113.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:26.213: INFO: Driver "cinder" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:25.873: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:25.713: INFO: Driver "nfs" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__blockfswithoutformat__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_write_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 36.3s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:15.938: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-node__Kubelet_when_scheduling_a_read_only_busybox_container_should_not_write_to_root_filesystem__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 75.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:08.208: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:07.725: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:07.307: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:06.774: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:06.354: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__Subpath_Atomic_writer_volumes_should_support_subpaths_with_secret_pod__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 71.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:05.993: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:05.888: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:59:05.483: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_apply_apply_set/view_last-applied__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 3.9s

_sig-apps__ReplicaSet_should_serve_a_basic_image_on_each_replica_with_a_public_image___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 49.1s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.3s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:56.769: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-api-machinery__Garbage_collector_should_delete_RS_created_by_deployment_when_not_orphaning__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.5s

_sig-api-machinery__ServerSideApply_should_give_up_ownership_of_a_field_if_forced_applied_by_a_controller__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:55.439: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:55.034: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__ConfigMap_should_be_consumable_from_pods_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 91.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:54.665: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__EmptyDir_volumes_when_FSGroup_is_specified__LinuxOnly___NodeFeature_FSGroup__volume_on_default_medium_should_have_the_correct_mode_using_FSGroup__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 31.1s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:54.257: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.4s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:33.160: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__Secrets_should_be_consumable_from_pods_in_volume_with_mappings_and_Item_Mode_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 53.3s

_sig-api-machinery__Server_request_timeout_should_return_HTTP_status_code_400_if_the_user_specifies_an_invalid_timeout_in_the_request_URL__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:32.094: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:31.879: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_an_if_a_SPBM_policy_and_VSAN_capabilities_cannot_be_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:58:31.330: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:58:31.573324   33702 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:58:31.573: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:58:31.578: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-3120" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-node__Probing_container_should_override_timeoutGracePeriodSeconds_when_StartupProbe_field_is_set__Feature_ProbeTerminationGracePeriod___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 59.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:30.686: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:30.242: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-api-machinery__Generated_clientset_should_create_v1_cronJobs,_delete_cronJobs,_watch_cronJobs__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:29.818: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-instrumentation__Events_should_ensure_that_an_event_can_be_fetched,_patched,_deleted,_and_listed__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:28.307: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
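
The testsuite.go:116 and testsuite.go:121 skips throughout this report come from the storage test suite matching each test pattern against the driver's declared capabilities (supported volume types and filesystems). A rough sketch of that check, with all names invented for illustration (the real logic lives in test/e2e/storage/framework/testsuite.go and differs in detail):

package storage

import "fmt"

// All types and names here are hypothetical; they only mirror the shape of the
// capability matching that generates the skip messages above.
type volType string

type driverCaps struct {
	name     string
	volTypes map[volType]bool // e.g. "DynamicPV", "InlineVolume", "PreprovisionedPV"
	fsTypes  map[string]bool  // e.g. "ext3", "ext4", "ntfs"
}

// skipReason returns a non-empty message when a test pattern asks for something
// the driver does not declare, mirroring the two skip messages seen in this report.
func skipReason(d driverCaps, wantVol volType, wantFs string) string {
	if !d.volTypes[wantVol] {
		return fmt.Sprintf("Driver %s doesn't support %s -- skipping", d.name, wantVol)
	}
	if wantFs != "" && !d.fsTypes[wantFs] {
		return fmt.Sprintf("Driver %s doesn't support %s -- skipping", d.name, wantFs)
	}
	return ""
}
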
_sig-cli__Kubectl_client_Kubectl_cluster-info_dump_should_check_if_cluster-info_dump_succeeds__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.2s

_sig-cli__Kubectl_client_Simple_pod_should_support_exec__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:18.403: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__CronJob_should_not_emit_unexpected_warnings__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 137.0s

_sig-storage__PersistentVolumes-local___Volume_type__block__Two_pods_mounting_a_local_volume_one_after_the_other_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 56.4s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:08.686: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/capacity.go:78]: Driver csi-hostpath doesn't publish storage capacity -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:08.392: INFO: Driver csi-hostpath doesn't publish storage capacity -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/capacity.go:78]: Driver csi-hostpath doesn't publish storage capacity -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 30.4s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:01.035: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:58:00.701: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext4 -- skipping

Stderr
_sig-apps__ReplicaSet_should_validate_Replicaset_Status_endpoints__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 30.0s

_sig-storage__PersistentVolumes_GCEPD_should_test_that_deleting_a_PVC_before_the_pod_does_not_cause_pod_deletion_to_fail_on_PD_detach__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:85]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pv
Oct 13 09:57:59.953: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:58:00.238840   32420 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:58:00.238: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:77
Oct 13 09:58:00.243: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pv-4725" for this suite.
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:111
Oct 13 09:58:00.284: INFO: AfterEach: Cleaning up test resources
Oct 13 09:58:00.284: INFO: pvc is nil
Oct 13 09:58:00.284: INFO: pv is nil
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:85]: Only supported for providers [gce gke] (not openstack)

Stderr
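
The GCEPD entry above shows a fuller Stdout than most skips because the framework's own BeforeEach creates the test namespace before the provider check runs, and its AfterEach destroys it again; only the test-specific setup is aborted. A hedged sketch of that spec shape, using the upstream framework and skipper helpers but with a hypothetical spec body:

package storage

import (
	"github.com/onsi/ginkgo"
	"k8s.io/kubernetes/test/e2e/framework"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// Hypothetical spec; NewDefaultFramework and SkipUnlessProviderIs are the
// upstream helpers, everything else is illustrative.
var _ = ginkgo.Describe("[sig-storage] PersistentVolumes GCEPD (sketch)", func() {
	// NewDefaultFramework registers BeforeEach/AfterEach that create and destroy
	// the e2e-pv-* namespace seen in the Stdout above, even when the spec skips.
	f := framework.NewDefaultFramework("pv")
	_ = f

	ginkgo.BeforeEach(func() {
		// Runs after the namespace exists, so a skip still leaves the
		// "Destroying namespace ..." teardown lines in the log.
		e2eskipper.SkipUnlessProviderIs("gce", "gke")
	})

	ginkgo.It("is skipped on non-GCE providers", func() {
		// real assertions elided
	})
})
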
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:59.458: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Container_Lifecycle_Hook_when_create_a_pod_with_lifecycle_hook_should_execute_poststart_http_hook_properly__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 65.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:49.322: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:48.976: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__Secrets_should_be_immutable_if_`immutable`_field_is_set__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:47.714: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:47.369: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:46.935: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__Job_should_run_a_job_to_completion_when_tasks_succeed__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 69.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:44.969: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-auth___Feature_NodeAuthorizer__Getting_a_non-existent_configmap_should_exit_with_the_Forbidden_error,_not_a_NotFound_error__Skipped_ibmcloud___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:43.795: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:43.434: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:42.966: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:42.611: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:42.190: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:41.713: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:41.277: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:40.820: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:40.388: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-node__Pods_should_allow_activeDeadlineSeconds_to_be_updated__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 35.5s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:32.540: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-auth__ServiceAccounts_should_run_through_the_lifecycle_of_a_ServiceAccount__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:31.208: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.2s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:26.670: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:26.301: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__ReplicaSet_Replace_and_Patch_tests__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 41.2s

_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_fails_if_no_zones_are_specified_in_the_storage_class__No_shared_datastores_exist_among_all_the_nodes___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:57:17.794: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:57:18.036147   30361 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:57:18.036: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:57:18.042: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-9800" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:17.242: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-autoscaling___HPA__Horizontal_pod_autoscaling__scale_resource__Custom_Metrics_from_Stackdriver__should_scale_down_with_External_Metric_with_target_average_value_from_Stackdriver__Feature_CustomMetricsAutoscaling___Skipped_gce___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 09:57:16.859: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:16.544: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.9s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:07.067: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:06.740: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:06.357: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:06.025: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 110.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:04.299: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:03.990: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:03.674: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_with_an_invalid_VSAN_capability_along_with_a_compatible_zone_combination_specified_in_storage_class_fails__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:57:03.070: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:57:03.298752   29519 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:57:03.298: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:57:03.302: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-139" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 09:57:02.177: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:57:02.411711   29503 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:57:02.411: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 09:57:02.415: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-6752" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:01.618: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:57:01.270: INFO: Driver emptydir doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ntfs -- skipping

Stderr
_sig-node__Security_Context_should_support_seccomp_unconfined_on_the_container__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 28.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:57.036: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:56.644: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:56.280: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:55.857: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Downward_API_volume_should_provide_podname_as_non-root_with_fsgroup__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.2s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:55.479: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:55.062: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:54.758: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:54.419: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__PodTemplates_should_run_the_lifecycle_of_PodTemplates__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:53.189: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-api-machinery__ServerSideApply_should_remove_a_field_if_it_is_owned_but_removed_in_the_apply_request__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:51.935: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:51.544: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:51.138: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:50.690: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-network__SCTP__Feature_SCTP___LinuxOnly__should_create_a_ClusterIP_Service_with_SCTP_ports__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 3.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:3335]: Couldn't detect KubeProxy mode - skip, error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-sctp-1997 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] SCTP [Feature:SCTP] [LinuxOnly]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename sctp
Oct 13 09:56:47.522: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:56:47.726486   28933 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:56:47.726: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] SCTP [Feature:SCTP] [LinuxOnly]
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:3220
[It] should create a ClusterIP Service with SCTP ports [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:3332
STEP: checking that kube-proxy is in iptables mode
Oct 13 09:56:47.785: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 13 09:56:49.801: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 13 09:56:49.812: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-sctp-1997 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 13 09:56:50.197: INFO: rc: 7
Oct 13 09:56:50.225: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 13 09:56:50.236: INFO: Pod kube-proxy-mode-detector no longer exists
Oct 13 09:56:50.236: INFO: Couldn't detect KubeProxy mode - skip, error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-sctp-1997 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
[AfterEach] [sig-network] SCTP [Feature:SCTP] [LinuxOnly]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-sctp-1997" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:3335]: Couldn't detect KubeProxy mode - skip, error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-sctp-1997 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 45.1s

_sig-storage__PersistentVolumes-local___Volume_type__dir-link__Set_fsGroup_for_local_volume_should_set_different_fsGroup_for_second_pod_if_first_pod_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 5.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 09:56:40.924: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:56:41.165085   28899 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:56:41.165: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 13 09:56:43.252: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918-backend && ln -s /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918-backend /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918] Namespace:e2e-persistent-local-volumes-test-9355 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-h5cpn ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 09:56:43.395: INFO: Creating a PV followed by a PVC
Oct 13 09:56:43.418: INFO: Waiting for PV local-pvwt8fr to bind to PVC pvc-9sxvv
Oct 13 09:56:43.418: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9sxvv] to have phase Bound
Oct 13 09:56:43.423: INFO: PersistentVolumeClaim pvc-9sxvv found but phase is Pending instead of Bound.
Oct 13 09:56:45.432: INFO: PersistentVolumeClaim pvc-9sxvv found and phase=Bound (2.014087942s)
Oct 13 09:56:45.433: INFO: Waiting up to 3m0s for PersistentVolume local-pvwt8fr to have phase Bound
Oct 13 09:56:45.436: INFO: PersistentVolume local-pvwt8fr found and phase=Bound (3.829255ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 09:56:45.450: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir-link]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 09:56:45.450: INFO: Deleting PersistentVolumeClaim "pvc-9sxvv"
Oct 13 09:56:45.463: INFO: Deleting PersistentVolume "local-pvwt8fr"
STEP: Removing the test directory
Oct 13 09:56:45.489: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918 && rm -r /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918-backend] Namespace:e2e-persistent-local-volumes-test-9355 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-h5cpn ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-9355" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_fails_if_the_availability_zone_specified_in_the_storage_class_have_no_shared_datastores_under_it.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:56:40.119: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:56:40.323221   28884 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:56:40.323: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:56:40.326: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-5629" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:39.407: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:106]: Driver nfs doesn't support Block -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:38.990: INFO: Driver nfs doesn't support Block -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:106]: Driver nfs doesn't support Block -- skipping

Stderr
_sig-storage__Flexvolumes_should_be_mountable_when_non-attachable__Skipped_gce___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.1s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:170]: Only supported for providers [gce local] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Flexvolumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename flexvolume
Oct 13 09:56:38.185: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:56:38.556899   28844 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:56:38.557: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Flexvolumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:169
Oct 13 09:56:38.577: INFO: Only supported for providers [gce local] (not openstack)
[AfterEach] [sig-storage] Flexvolumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-flexvolume-6438" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:170]: Only supported for providers [gce local] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:37.553: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:37.120: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:36.722: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:36.312: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__CustomResourcePublishOpenAPI__Privileged_ClusterAdmin__removes_definition_from_spec_when_one_version_gets_changed_to_not_be_served__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 101.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:35.794: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 115.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:32.104: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-node__PrivilegedPod__NodeConformance__should_enable_privileged_commands__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.4s

_sig-storage__PersistentVolumes-local___Volume_type__blockfswithoutformat__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_read_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 34.3s

_sig-node__Probing_container_should_have_monotonically_increasing_restart_count__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 166.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:16.224: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Variable_Expansion_should_allow_substituting_values_in_a_container's_command__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 55.1s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:10.633: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:10.330: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:10.033: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:09.707: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-node__Probing_container_should_be_restarted_by_liveness_probe_after_startup_probe_enables_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 93.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:56:06.660: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-bindmounted__Two_pods_mounting_a_local_volume_one_after_the_other_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 34.1s

_sig-storage__Projected_downwardAPI_should_provide_podname_as_non-root_with_fsgroup__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 35.1s

_sig-storage__EmptyDir_volumes_when_FSGroup_is_specified__LinuxOnly___NodeFeature_FSGroup__new_files_should_be_created_with_FSGroup_ownership_when_container_is_non-root__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 35.1s

_sig-node__Probing_container_should_be_restarted_with_an_exec_liveness_probe_with_timeout__MinimumKubeletVersion_1.20___NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 75.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:31.944: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:31.616: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:31.279: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:30.945: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 35.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 50.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:25.293: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:24.953: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-api-machinery__CustomResourcePublishOpenAPI__Privileged_ClusterAdmin__works_for_CRD_without_validation_schema__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 74.0s

_sig-node__Security_Context_should_support_container.SecurityContext.RunAsUser_And_container.SecurityContext.RunAsGroup__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.1s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:14.862: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:14.544: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__AppArmor_load_AppArmor_profiles_can_disable_an_AppArmor_profile,_using_unconfined__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/framework/skipper/skipper.go:291]: Only supported for node OS distro [gci ubuntu] (not custom)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-node] AppArmor
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename apparmor
Oct 13 09:55:13.856: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:55:14.180820   25532 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:55:14.180: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  k8s.io/kubernetes@v1.22.1/test/e2e/node/apparmor.go:32
Oct 13 09:55:14.200: INFO: Only supported for node OS distro [gci ubuntu] (not custom)
[AfterEach] load AppArmor profiles
  k8s.io/kubernetes@v1.22.1/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-apparmor-5501" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/framework/skipper/skipper.go:291]: Only supported for node OS distro [gci ubuntu] (not custom)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:13.228: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:12.896: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:12.577: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:12.189: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
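
A second recurring skip reason, seen in the entry above, is environment-based rather than capability-based: tests tied to a specific cloud provider (aws, vsphere, azure) or node OS distro are skipped when the cluster under test runs elsewhere (openstack in this run). Below is a hedged sketch of such a provider gate; skipUnlessProvider and the hard-coded currentProvider are invented for this example and are not the framework's actual API.

// Illustrative sketch of a provider-based skip check. In a real run the
// current provider would come from the test configuration, not a constant.
package main

import "fmt"

// skipUnlessProvider reproduces the "Only supported for providers [X] (not Y)"
// decision seen in this report.
func skipUnlessProvider(current string, supported ...string) (run bool, reason string) {
	for _, p := range supported {
		if p == current {
			return true, ""
		}
	}
	return false, fmt.Sprintf("Only supported for providers %v (not %s)", supported, current)
}

func main() {
	const currentProvider = "openstack" // assumption for illustration only
	if ok, why := skipUnlessProvider(currentProvider, "aws"); !ok {
		fmt.Println(why) // prints: Only supported for providers [aws] (not openstack)
	}
}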
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:11.808: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:11.488: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:11.156: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Probing_container_with_readiness_probe_that_fails_should_never_be_ready_and_never_restart__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 61.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:08.469: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_VSAN_storage_capability_with_invalid_hostFailuresToTolerate_value_is_not_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:55:07.757: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:55:08.054236   25271 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:55:08.054: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:55:08.059: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-5053" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:07.145: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:06.731: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:06.334: INFO: Driver "cinder" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:05.923: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:05.565: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:05.239: INFO: Driver emptydir doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:55:04.800: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-node__Secrets_should_be_consumable_via_the_environment__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.2s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:53.445: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 79.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:47.158: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-cli__Kubectl_client_Kubectl_logs_should_be_able_to_retrieve_and_filter_logs___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 42.5s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:43.596: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:43.158: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:42.754: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:42.365: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__Garbage_collector_should_not_be_blocked_by_dependency_circle__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 6.1s

_sig-network__SCTP__Feature_SCTP___LinuxOnly__should_create_a_Pod_with_SCTP_HostPort__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 35.6s

_sig-network__HostPort_validates_that_there_is_no_conflict_between_pods_with_same_hostPort_but_different_hostIP_and_protocol__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 65.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 45.7s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:25.349: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:24.992: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__Aggregator_Should_be_able_to_support_the_1.17_Sample_API_Server_using_the_current_Aggregator__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:24.512: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-instrumentation__Events_API_should_ensure_that_an_event_can_be_fetched,_patched,_deleted,_and_listed__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:24.146: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:23.967: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:23.626: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:23.271: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:22.895: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:22.607: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:22.307: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:22.037: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:21.738: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:21.431: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:54:21.146: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-network__DNS_should_provide_DNS_for_pods_for_Subdomain__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 36.2s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 192.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:49.065: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:48.706: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:48.279: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:47.868: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:47.427: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:47.050: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:46.709: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 62.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:33.822: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_mock_volume_CSI_online_volume_expansion_should_expand_volume_without_restarting_pod_if_attach=off,_nodeExpansion=on__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 176.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:33.413: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:33.070: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:32.960: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:32.552: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_include_webhook_resources_in_discovery_documents__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 48.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:32.022: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-auth__ServiceAccounts_should_guarantee_kube-root-ca.crt_exist_in_any_namespace__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:106]: Driver "hostPathSymlink" does not support exec - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume
Oct 13 09:53:25.640: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:53:25.872810   21053 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:53:25.872: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:196
Oct 13 09:53:25.878: INFO: Driver "hostPathSymlink" does not support exec - skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-1319" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:106]: Driver "hostPathSymlink" does not support exec - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:24.921: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:24.482: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 67.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:16.624: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__Projected_downwardAPI_should_provide_container's_memory_limit__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.3s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:14.987: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:14.619: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 163.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:14.251: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:13.966: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:13.857: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:13.562: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-api-machinery__CustomResourceDefinition_resources__Privileged_ClusterAdmin__Simple_CustomResourceDefinition_creating/deleting_custom_resource_definition_objects_works___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.9s

_sig-api-machinery__Generated_clientset_should_create_pods,_set_the_deletionTimestamp_and_deletionGracePeriodSeconds_of_the_pod__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 27.1s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:05.061: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:04.674: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:04.328: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-api-machinery__Servers_with_support_for_Table_transformation_should_return_a_406_for_a_backend_which_does_not_implement_metadata__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:03.126: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:02.733: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:02.408: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-network__DNS_should_support_configurable_pod_DNS_nameservers__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.3s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:53:01.407: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 41.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.9s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:36.135: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Security_Context_should_support_container.SecurityContext.RunAsUser__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 31.1s

_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_if_a_SPBM_policy_is_not_honored_on_a_non-compatible_datastore_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:52:30.628: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:30.887790   18638 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:30.887: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:52:30.893: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-3559" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 145.0s

_sig-storage__Ephemeralstorage_When_pod_refers_to_non-existent_ephemeral_storage_should_allow_deletion_of_pod_with_invalid_volume___projected__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 127.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:21.841: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_with_compatible_policy_and_datastore_without_any_zones_specified_in_the_storage_class_fails__No_shared_datastores_exist_among_all_the_nodes___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:52:21.758: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:21.975020   18566 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:21.975: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:52:21.982: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-7184" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:21.509: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:21.160: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:20.845: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:20.514: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:20.176: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:19.851: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:19.471: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:19.095: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 53.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:18.129: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__block__Set_fsGroup_for_local_volume_should_set_different_fsGroup_for_second_pod_if_first_pod_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 5.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:263]: We don't set fsGroup on block device, skipped.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 09:52:15.934: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:16.145302   18284 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:16.145: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: block]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "ostest-n5rnf-worker-0-8kq82" using path "/tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe"
Oct 13 09:52:18.238: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe && dd if=/dev/zero of=/tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 09:52:18.421: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 09:52:18.592: INFO: Creating a PV followed by a PVC
Oct 13 09:52:18.642: INFO: Waiting for PV local-pvjnn2c to bind to PVC pvc-fg46f
Oct 13 09:52:18.642: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fg46f] to have phase Bound
Oct 13 09:52:18.678: INFO: PersistentVolumeClaim pvc-fg46f found but phase is Pending instead of Bound.
Oct 13 09:52:20.683: INFO: PersistentVolumeClaim pvc-fg46f found and phase=Bound (2.040523355s)
Oct 13 09:52:20.683: INFO: Waiting up to 3m0s for PersistentVolume local-pvjnn2c to have phase Bound
Oct 13 09:52:20.686: INFO: PersistentVolume local-pvjnn2c found and phase=Bound (3.621209ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
Oct 13 09:52:20.694: INFO: We don't set fsGroup on block device, skipped.
[AfterEach] [Volume type: block]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 09:52:20.694: INFO: Deleting PersistentVolumeClaim "pvc-fg46f"
Oct 13 09:52:20.705: INFO: Deleting PersistentVolume "local-pvjnn2c"
Oct 13 09:52:20.729: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Tear down block device "/dev/loop0" on node "ostest-n5rnf-worker-0-8kq82" at path /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file
Oct 13 09:52:20.869: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Removing the test directory /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe
Oct 13 09:52:21.001: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-1836" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:263]: We don't set fsGroup on block device, skipped.

Stderr
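
Note: for reference, the node-side loop-device setup and teardown captured in the Stdout above can be replayed by hand; this is only a minimal sketch built from the exact commands the test issued through nsenter, and it assumes you are running as root on the worker node, that the /tmp path is writable, and that the UUID-suffixed directory name is just the one generated for this particular run.

  # Create the backing file and attach it to a free loop device
  # (mirrors the test's "Initializing test volumes" step)
  mkdir -p /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe
  dd if=/dev/zero of=/tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file bs=4096 count=5120
  losetup -f /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file

  # Look up which loop device now backs the file (same lookup the test logs)
  E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file | awk '{ print $1 }')
  echo "${E2E_LOOP_DEV}"

  # Teardown, mirroring the test's cleanup phase
  losetup -d "${E2E_LOOP_DEV}"
  rm -r /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe
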
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:15.492: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:15.157: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:14.848: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-cli__Kubectl_Port_forwarding_With_a_server_listening_on_localhost_that_expects_a_client_request_should_support_a_client_that_connects,_sends_DATA,_and_disconnects__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 41.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:08.753: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:08.423: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:08.149: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-apps__ReplicaSet_should_serve_a_basic_image_on_each_replica_with_a_private_image__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/apps/replica_set.go:115]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-apps] ReplicaSet
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename replicaset
Oct 13 09:52:07.500: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:07.750848   17854 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:07.750: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a private image [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/replica_set.go:113
Oct 13 09:52:07.757: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-apps] ReplicaSet
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-replicaset-1554" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/apps/replica_set.go:115]: Only supported for providers [gce gke] (not openstack)

Stderr
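
Note: the skipped records above and below are not failures; the suite decides before running each case whether the storage driver advertises the required volume type and filesystem and whether the current cloud provider is in the test's allowed list, and it skips otherwise (the actual checks live at the testsuite.go, in_tree.go, and test-specific locations cited in each skip line). The following Go sketch is a hypothetical, simplified model of that decision; the type names, fields, and skipReason function are invented for illustration and are not the real k8s.io/kubernetes e2e framework API.

// Hypothetical sketch of the capability/provider checks that produce the
// "-- skipping" messages in this report. Not the real e2e framework code.
package main

import "fmt"

type driverCaps struct {
	volumeTypes map[string]bool // e.g. "DynamicPV", "PreprovisionedPV", "InlineVolume"
	fsTypes     map[string]bool // e.g. "ext3", "ext4", "ntfs"
	providers   []string        // empty slice means "any provider"
}

type testPattern struct {
	volumeType string
	fsType     string
}

// skipReason returns a non-empty message when the driver/pattern/provider
// combination cannot run, mirroring the wording seen in the skip lines.
func skipReason(driver string, caps driverCaps, p testPattern, provider string) string {
	if !caps.volumeTypes[p.volumeType] {
		return fmt.Sprintf("Driver %s doesn't support %s -- skipping", driver, p.volumeType)
	}
	if p.fsType != "" && !caps.fsTypes[p.fsType] {
		return fmt.Sprintf("Driver %s doesn't support %s -- skipping", driver, p.fsType)
	}
	if len(caps.providers) > 0 {
		for _, pr := range caps.providers {
			if pr == provider {
				return ""
			}
		}
		return fmt.Sprintf("Only supported for providers %v (not %s)", caps.providers, provider)
	}
	return ""
}

func main() {
	// Example: the "local" driver in these records only supports pre-provisioned PVs.
	local := driverCaps{
		volumeTypes: map[string]bool{"PreprovisionedPV": true},
		fsTypes:     map[string]bool{"ext4": true},
	}
	pattern := testPattern{volumeType: "DynamicPV", fsType: "ntfs"}
	if reason := skipReason("local", local, pattern, "openstack"); reason != "" {
		fmt.Println("skip:", reason) // skip: Driver local doesn't support DynamicPV -- skipping
	}
}
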
_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_validate_Statefulset_Status_endpoints__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 71.0s

_sig-storage__PersistentVolumes__Feature_vsphere__Feature_ReclaimPolicy__persistentvolumereclaim_vsphere__Feature_vsphere__should_not_detach_and_unmount_PV_when_associated_pvc_with_delete_as_reclaimPolicy_is_deleted_when_it_is_in_use_by_the_pod__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:55]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistentvolumereclaim
Oct 13 09:52:04.854: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:05.105156   17828 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:05.105: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:47
[BeforeEach] persistentvolumereclaim:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:54
Oct 13 09:52:05.111: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] persistentvolumereclaim:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:63
STEP: running testCleanupVSpherePersistentVolumeReclaim
[AfterEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistentvolumereclaim-6513" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:55]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:52:04.294: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__Downward_API_volume_should_provide_podname_as_non-root_with_fsgroup_and_defaultMode__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.0s

_sig-apps__Job_should_delete_a_job__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 89.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:44.708: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:44.321: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:43.881: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Security_Context_When_creating_a_container_with_runAsNonRoot_should_not_run_without_a_specified_user_ID__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 50.8s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:26.965: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:26.599: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:26.211: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__Projected_configMap_should_be_consumable_from_pods_in_volume_with_mappings_and_Item_mode_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 47.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:16.746: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:16.423: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:16.123: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__Projected_downwardAPI_should_update_labels_on_modification__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 51.5s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:15.472: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV,_based_on_allowed_zones_specified_in_storage_class___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:51:15.523: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:51:15.738224   15766 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:51:15.738: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:51:15.741: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-6943" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:14.983: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_VSAN_storage_capability_with_valid_diskStripes_and_objectSpaceReservation_values_is_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:51:14.799: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:51:15.085335   15731 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:51:15.085: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:51:15.088: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-1513" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 38.8s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:14.294: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_fails_if_only_storage_policy_is_specified_in_the_storage_class__No_shared_datastores_exist_among_all_the_nodes___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:51:14.408: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:51:14.627518   15703 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:51:14.627: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:51:14.634: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-4899" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:13.952: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-api-machinery__Watchers_should_observe_add,_update,_and_delete_watch_notifications_on_configmaps__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 61.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:13.798: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:13.607: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:13.526: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:13.242: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:13.126: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:199]: Not enough topologies in cluster -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename topology
Oct 13 09:51:13.225: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:51:13.405591   15605 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:51:13.405: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:192
Oct 13 09:51:13.421: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:nova]
Oct 13 09:51:13.421: INFO: In-tree plugin kubernetes.io/cinder is not migrated, not validating any metrics
Oct 13 09:51:13.421: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-topology-25" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:199]: Not enough topologies in cluster -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:12.857: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:12.449: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-node__ConfigMap_should_fail_to_create_ConfigMap_with_empty_key__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:12.019: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:11.579: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_cluster-info_should_check_if_Kubernetes_control_plane_services_is_included_in_cluster-info___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.2s

_sig-cli__Kubectl_Port_forwarding_With_a_server_listening_on_localhost_that_expects_NO_client_request_should_support_a_client_that_connects,_sends_DATA,_and_disconnects__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 89.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:06.605: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:06.239: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Generic_Ephemeral-volume__default_fs___late-binding___ephemeral_should_support_multiple_inline_ephemeral_volumes__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 199.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:51:05.858: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Downward_API_volume_should_provide_container's_cpu_limit__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 45.2s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:241]: Driver "nfs" does not support cloning - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:50:58.028: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:50:58.226298   14720 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:50:58.226: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with pvc data source [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:239
Oct 13 09:50:58.233: INFO: Driver "nfs" does not support cloning - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-1802" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:241]: Driver "nfs" does not support cloning - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:57.374: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__EmptyDir_volumes_pod_should_support_memory_backed_volumes_of_specified_size__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 29.2s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.3s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.7s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:27.430: INFO: Driver "nfs" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:26.987: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:26.617: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:26.226: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 62.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:24.028: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 58.3s

_sig-storage__EmptyDir_volumes_should_support__root,0666,tmpfs___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 53.4s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:12.129: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:11.696: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:11.271: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:10.835: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-api-machinery__Discovery_should_validate_PreferredVersion_for_each_APIGroup__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.8s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:08.658: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Volume_Placement__Feature_vsphere__should_create_and_delete_pod_with_multiple_volumes_from_different_datastore__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-placement
Oct 13 09:50:07.987: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:50:08.188434   12893 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:50:08.188: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55
Oct 13 09:50:08.197: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-placement-6470" for this suite.
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__ResourceQuota_should_verify_ResourceQuota_with_terminating_scopes.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 17.3s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:06.351: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:05.952: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Container_Lifecycle_Hook_when_create_a_pod_with_lifecycle_hook_should_execute_poststart_exec_hook_properly__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 69.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:50:01.377: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.7s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:59.544: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_mutate_custom_resource_with_different_stored_version__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 58.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:27.665: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:27.222: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:26.878: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 216.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:25.146: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-apps__DisruptionController_evictions__maxUnavailable_allow_single_eviction,_percentage_=>_should_allow_an_eviction__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 95.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:21.615: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:21.284: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:20.940: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:20.610: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:20.285: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:19.967: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:19.641: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 74.0s

_sig-node__Container_Runtime_blackbox_test_when_running_a_container_with_a_new_image_should_not_be_able_to_pull_from_private_registry_without_secret__NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.4s

_sig-cli__Kubectl_client_Kubectl_apply_should_reuse_port_when_apply_to_an_existing_SVC__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:16.806: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__Projected_downwardAPI_should_provide_node_allocatable__cpu__as_default_cpu_limit_if_the_limit_is_not_set__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 49.3s

_sig-cli__Kubectl_client_Kubectl_apply_should_apply_a_new_configuration_to_an_existing_RC__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 4.4s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:12.106: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:11.762: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:11.437: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Downward_API_volume_should_update_labels_on_modification__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 49.4s

_sig-network__Networking_should_provide_Internet_connection_for_containers__Feature_Networking-IPv4___Skipped_Disconnected___Skipped_azure___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 59.3s

Failed:
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/networking.go:85]: Unexpected error:
    <*errors.errorString | 0xc002a0ba70>: {
        s: "pod \"connectivity-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.196.2.169 PodIP:10.128.174.211 PodIPs:[{IP:10.128.174.211}] StartTime:2022-10-13 09:49:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:agnhost-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2022-10-13 09:49:33 +0000 UTC,FinishedAt:2022-10-13 09:50:03 +0000 UTC,ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920 Started:0xc002396fd5}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    pod "connectivity-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.196.2.169 PodIP:10.128.174.211 PodIPs:[{IP:10.128.174.211}] StartTime:2022-10-13 09:49:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:agnhost-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2022-10-13 09:49:33 +0000 UTC,FinishedAt:2022-10-13 09:50:03 +0000 UTC,ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920 Started:0xc002396fd5}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] Networking
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename nettest
Oct 13 09:49:08.484: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:49:08.700481   10397 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:49:08.700: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide Internet connection for containers [Feature:Networking-IPv4] [Skipped:Disconnected] [Skipped:azure] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/network/networking.go:83
STEP: Running container which tries to connect to 8.8.8.8
Oct 13 09:49:08.742: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "e2e-nettest-7251" to be "Succeeded or Failed"
Oct 13 09:49:08.748: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.277044ms
Oct 13 09:49:10.752: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01012135s
Oct 13 09:49:12.764: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021428112s
Oct 13 09:49:14.778: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035815163s
Oct 13 09:49:16.786: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043217684s
Oct 13 09:49:18.809: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066816996s
Oct 13 09:49:20.820: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077917742s
Oct 13 09:49:22.856: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.11393818s
Oct 13 09:49:24.868: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.125807143s
Oct 13 09:49:26.877: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.135003283s
Oct 13 09:49:28.899: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.156629149s
Oct 13 09:49:30.908: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.166013815s
Oct 13 09:49:32.913: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 24.17085184s
Oct 13 09:49:34.929: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 26.186318883s
Oct 13 09:49:36.935: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 28.192212983s
Oct 13 09:49:38.940: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 30.197728174s
Oct 13 09:49:40.947: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 32.204458377s
Oct 13 09:49:42.952: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 34.209981506s
Oct 13 09:49:44.958: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 36.2154586s
Oct 13 09:49:46.962: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 38.219572766s
Oct 13 09:49:48.978: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 40.235607207s
Oct 13 09:49:50.992: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 42.250059674s
Oct 13 09:49:53.004: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 44.261910632s
Oct 13 09:49:55.023: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 46.280628459s
Oct 13 09:49:57.032: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 48.289490479s
Oct 13 09:49:59.044: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 50.301753338s
Oct 13 09:50:01.050: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 52.307451321s
Oct 13 09:50:03.056: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 54.314059414s
Oct 13 09:50:05.068: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=false. Elapsed: 56.325438886s
Oct 13 09:50:07.087: INFO: Pod "connectivity-test": Phase="Failed", Reason="", readiness=false. Elapsed: 58.34496563s
Oct 13 09:50:07.140: INFO: pod e2e-nettest-7251/connectivity-test logs:
nc: connect to 8.8.8.8 port 53 (tcp) timed out: Operation in progress

[AfterEach] [sig-network] Networking
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "e2e-nettest-7251".
STEP: Found 5 events.
Oct 13 09:50:07.147: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for connectivity-test: { } Scheduled: Successfully assigned e2e-nettest-7251/connectivity-test to ostest-n5rnf-worker-0-94fxs
Oct 13 09:50:07.147: INFO: At 2022-10-13 09:49:33 +0000 UTC - event for connectivity-test: {multus } AddedInterface: Add eth0 [10.128.174.211/23] from kuryr
Oct 13 09:50:07.147: INFO: At 2022-10-13 09:49:33 +0000 UTC - event for connectivity-test: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 09:50:07.147: INFO: At 2022-10-13 09:49:33 +0000 UTC - event for connectivity-test: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container agnhost-container
Oct 13 09:50:07.147: INFO: At 2022-10-13 09:49:33 +0000 UTC - event for connectivity-test: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container agnhost-container
Oct 13 09:50:07.152: INFO: POD                NODE                         PHASE   GRACE  CONDITIONS
Oct 13 09:50:07.152: INFO: connectivity-test  ostest-n5rnf-worker-0-94fxs  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:49:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:50:04 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:50:04 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:49:08 +0000 UTC  }]
Oct 13 09:50:07.152: INFO: 
Oct 13 09:50:07.173: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-nettest-7251" for this suite.
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/networking.go:85]: Unexpected error:
    <*errors.errorString | 0xc002a0ba70>: {
        s: "pod \"connectivity-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.196.2.169 PodIP:10.128.174.211 PodIPs:[{IP:10.128.174.211}] StartTime:2022-10-13 09:49:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:agnhost-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2022-10-13 09:49:33 +0000 UTC,FinishedAt:2022-10-13 09:50:03 +0000 UTC,ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920 Started:0xc002396fd5}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    pod "connectivity-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.196.2.169 PodIP:10.128.174.211 PodIPs:[{IP:10.128.174.211}] StartTime:2022-10-13 09:49:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:agnhost-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2022-10-13 09:49:33 +0000 UTC,FinishedAt:2022-10-13 09:50:03 +0000 UTC,ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920 Started:0xc002396fd5}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Stderr
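Note on the failure above: the test pod runs an agnhost container that tries to reach 8.8.8.8 on port 53, the connection times out ("nc: connect to 8.8.8.8 port 53 (tcp) timed out"), the container exits non-zero, and the pod ends up in Phase:Failed. As a rough sketch only (not part of the test suite; it assumes 8.8.8.8:53 is an acceptable probe target from the pod network and that a plain TCP dial is representative of what the test exercises), the check is essentially equivalent to:

  package main

  import (
          "fmt"
          "net"
          "time"
  )

  func main() {
          // Attempt an outbound TCP connection to 8.8.8.8:53 with a timeout,
          // mirroring what the connectivity-test container's nc call does.
          conn, err := net.DialTimeout("tcp", "8.8.8.8:53", 5*time.Second)
          if err != nil {
                  fmt.Println("connect failed:", err)
                  return
          }
          defer conn.Close()
          fmt.Println("connect ok")
  }

If the same dial fails when run from a debug pod scheduled on the affected worker (ostest-n5rnf-worker-0-94fxs in the events above), the failure points at cluster egress/NAT rather than the test itself; this is an interpretation aid, not part of the recorded output.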
_sig-storage__Projected_combined_should_project_all_components_that_make_up_the_projection_API__Projection__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 23.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:49:01.834: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-auth__Certificates_API__Privileged_ClusterAdmin__should_support_CSR_API_operations__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.8s

_sig-node__Sysctls__LinuxOnly___NodeConformance__should_support_sysctls__MinimumKubeletVersion_1.21___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 26.8s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:50.750: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:50.429: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:50.062: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:49.713: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext4 -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_on_a_non-Workspace_zone_and_attached_to_a_dynamically_created_PV,_based_on_the_allowed_zones_and_storage_policy_specified_in_storage_class__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:48:49.080: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:48:49.309187    9878 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:48:49.309: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:48:49.313: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-3734" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__EmptyDir_volumes_should_support__root,0644,default___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:48.492: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:48.087: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
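
Note: the recurring "Driver <name> doesn't support <volume type> -- skipping" entries are produced by a per-driver capability check before each test pattern runs. A rough, illustrative sketch of that check, assuming only the framework's Skipf helper (the driverInfo shape and function name are assumptions, not the framework's identifiers):

  package storage

  import (
      e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
  )

  // assumed shape of the per-driver capability data
  type driverInfo struct {
      Name              string
      SupportedVolTypes map[string]bool // e.g. "InlineVolume", "PreprovisionedPV", "DynamicPV"
  }

  func skipUnsupportedVolType(d driverInfo, volType string) {
      if !d.SupportedVolTypes[volType] {
          // Recorded in this report as a skipped test with the same message.
          e2eskipper.Skipf("Driver %s doesn't support %s -- skipping", d.Name, volType)
      }
  }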
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:47.706: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__EmptyDir_volumes_should_support__root,0644,tmpfs___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 21.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:46.943: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-node__Pods_Extended_Pod_Container_lifecycle_should_not_create_extra_sandbox_if_all_containers_are_done__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 29.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:30.698: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:30.364: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.3s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:27.734: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 59.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:399]: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:48:27.772: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:48:27.948574    8687 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:48:27.948: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:395
Oct 13 09:48:27.957: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 13 09:48:27.992: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-e2e-provisioning-5038" in namespace "e2e-provisioning-5038" to be "Succeeded or Failed"
Oct 13 09:48:27.999: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 7.065236ms
Oct 13 09:48:30.014: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021224674s
Oct 13 09:48:32.021: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028396261s
Oct 13 09:48:34.034: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04177016s
Oct 13 09:48:36.041: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048702619s
Oct 13 09:48:38.047: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054787811s
Oct 13 09:48:40.055: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063010201s
Oct 13 09:48:42.060: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 14.068043533s
Oct 13 09:48:44.064: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 16.071757724s
Oct 13 09:48:46.072: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 18.079611956s
Oct 13 09:48:48.079: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 20.086499453s
Oct 13 09:48:50.087: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 22.094405184s
Oct 13 09:48:52.098: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 24.105475345s
Oct 13 09:48:54.108: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 26.115741658s
Oct 13 09:48:56.117: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 28.124893913s
Oct 13 09:48:58.127: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 30.134633358s
Oct 13 09:49:00.137: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 32.144367474s
Oct 13 09:49:02.141: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 34.148473412s
Oct 13 09:49:04.150: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 36.157241569s
Oct 13 09:49:06.156: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 38.163996632s
Oct 13 09:49:08.169: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 40.176627455s
Oct 13 09:49:10.197: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 42.204908262s
Oct 13 09:49:12.208: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 44.215467635s
Oct 13 09:49:14.218: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 46.22602386s
STEP: Saw pod success
Oct 13 09:49:14.218: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038" satisfied condition "Succeeded or Failed"
Oct 13 09:49:14.218: INFO: Deleting pod "hostpath-symlink-prep-e2e-provisioning-5038" in namespace "e2e-provisioning-5038"
Oct 13 09:49:14.262: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-e2e-provisioning-5038" to be fully deleted
Oct 13 09:49:14.275: INFO: Creating resource for inline volume
Oct 13 09:49:14.275: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Oct 13 09:49:14.276: INFO: Deleting pod "pod-subpath-test-inlinevolume-2tt9" in namespace "e2e-provisioning-5038"
Oct 13 09:49:14.349: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-e2e-provisioning-5038" in namespace "e2e-provisioning-5038" to be "Succeeded or Failed"
Oct 13 09:49:14.369: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 20.111261ms
Oct 13 09:49:16.379: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030590341s
Oct 13 09:49:18.400: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051534837s
Oct 13 09:49:20.412: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063376148s
Oct 13 09:49:22.433: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084303031s
Oct 13 09:49:24.443: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09435229s
STEP: Saw pod success
Oct 13 09:49:24.443: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038" satisfied condition "Succeeded or Failed"
Oct 13 09:49:24.443: INFO: Deleting pod "hostpath-symlink-prep-e2e-provisioning-5038" in namespace "e2e-provisioning-5038"
Oct 13 09:49:24.459: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-e2e-provisioning-5038" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-5038" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:399]: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source

Stderr
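
Note: the long "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" sequence above is a poll loop over the pod's phase. A minimal sketch of that pattern using plain client-go (not the framework's own helper; the function name is illustrative):

  package main

  import (
      "context"
      "fmt"
      "time"

      v1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/apimachinery/pkg/util/wait"
      "k8s.io/client-go/kubernetes"
  )

  // waitForPodCompletion polls the pod until it reaches a terminal phase,
  // printing one status line per poll, much like the log lines above.
  func waitForPodCompletion(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
      return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
          pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
          if err != nil {
              return false, err
          }
          fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
          return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
      })
  }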
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:27.236: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:26.795: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__Ephemeralstorage_When_pod_refers_to_non-existent_ephemeral_storage_should_allow_deletion_of_pod_with_invalid_volume___secret__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 127.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:08.663: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node___Feature_Example__Secret_should_create_a_pod_that_reads_a_secret__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.5s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:04.114: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:03.798: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:03.463: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:03.127: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:02.748: INFO: Driver hostPath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:48:02.436: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_adopt_matching_orphans_and_release_non-matching_pods__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 69.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Generic_Ephemeral-volume__default_fs___late-binding___ephemeral_should_create_read-only_inline_ephemeral_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 262.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:57.134: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:56.778: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:56.456: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-node__Container_Runtime_blackbox_test_on_terminated_container_should_report_termination_message__LinuxOnly__from_file_when_pod_succeeds_and_TerminationMessagePolicy_FallbackToLogsOnError_is_set__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 34.2s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:56.127: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:55.759: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:55.448: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-cli__Kubectl_Port_forwarding_With_a_server_listening_on_0.0.0.0_should_support_forwarding_over_websockets__Skipped_Proxy___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.0s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:44.091: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Mount_propagation_should_propagate_mounts_within_defined_scopes__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 69.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:37.993: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_clean_up_of_stale_dummy_VM_for_dynamically_provisioned_pvc_using_SPBM_policy__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.1s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:47:37.420: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:47:37.670632    7043 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:47:37.670: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:47:37.679: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-8444" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-node__Probing_container_should_be_restarted_startup_probe_fails__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 104.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:32.484: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:106]: Driver nfs doesn't support Block -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:32.163: INFO: Driver nfs doesn't support Block -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:106]: Driver nfs doesn't support Block -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:31.836: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:31.445: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_read_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.8s

_sig-network__Services_should_find_a_service_from_listing_all_namespaces__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__PV_Protection_Verify__immediate__deletion_of_a_PV_that_is_not_bound_to_a_PVC__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 3.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:27.237: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 35.9s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:26.154: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__ConfigMap_binary_data_should_be_reflected_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 25.1s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:00.724: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:00.281: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__CronJob_should_be_able_to_schedule_after_more_than_100_missed_schedule__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 61.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:59.474: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:59.162: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:58.843: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 113.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:55.528: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__vsphere_statefulset__Feature_vsphere__vsphere_statefulset_testing__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_statefulsets.go:64]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] vsphere statefulset [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename vsphere-statefulset
Oct 13 09:46:54.956: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:46:55.117719    5302 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:46:55.117: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vsphere statefulset [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_statefulsets.go:63
Oct 13 09:46:55.125: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] vsphere statefulset [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-vsphere-statefulset-7657" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_statefulsets.go:64]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-network__DNS_should_provide_DNS_for_services___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 48.6s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 71.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:44.366: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_honor_timeout__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 63.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:40.863: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:40.506: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:40.122: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:39.746: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir__Two_pods_mounting_a_local_volume_one_after_the_other_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 50.3s

_sig-storage__PersistentVolumes-local___Volume_type__dir-bindmounted__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_read_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.7s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:16.714: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:16.389: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:16.064: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:15.740: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:15.379: INFO: Driver "cinder" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping

Stderr
_sig-api-machinery__Garbage_collector_should_keep_the_rc_around_until_all_its_pods_are_deleted_if_the_deleteOptions_says_so__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 44.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:14.967: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__Projected_secret_should_be_able_to_mount_in_a_volume_regardless_of_a_different_secret_existing_with_same_name_in_different_namespace__NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.3s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:10.709: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:10.394: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-node__ConfigMap_should_run_through_a_ConfigMap_lifecycle__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.3s

_sig-storage__PersistentVolumes_NFS_when_invoking_the_Recycle_reclaim_policy_should_test_that_a_PV_becomes_Available_and_is_clean_after_the_PVC_is_deleted.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 126.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:49.799: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:49.514: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:49.106: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:48.667: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:106]: Driver nfs doesn't support Block -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:48.286: INFO: Driver nfs doesn't support Block -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:106]: Driver nfs doesn't support Block -- skipping

Stderr
_sig-api-machinery__ResourceQuota__Feature_PodPriority__should_verify_ResourceQuota's_priority_class_scope__quota_set_to_pod_count__1__against_a_pod_with_different_priority_class__ScopeSelectorOpNotIn_.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 4.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:42.950: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:42.608: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "csi-hostpath" does not define supported mount option - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:45:41.811: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:45:42.102646    2512 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:45:42.102: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:180
Oct 13 09:45:42.109: INFO: Driver "csi-hostpath" does not define supported mount option - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-4538" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "csi-hostpath" does not define supported mount option - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:203]: Driver "cinder" does not support populate data from snapshot - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:45:41.027: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:45:41.209952    2500 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:45:41.210: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:201
Oct 13 09:45:41.218: INFO: Driver "cinder" does not support populate data from snapshot - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-9489" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:203]: Driver "cinder" does not support populate data from snapshot - skipping

Stderr
_sig-storage__Projected_secret_should_be_consumable_from_pods_in_volume_with_mappings__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 41.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:25.988: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 132.0s

_sig-node__Container_Runtime_blackbox_test_on_terminated_container_should_report_termination_message__LinuxOnly__if_TerminationMessagePath_is_set__NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 22.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:03.492: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:03.088: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.3s

_sig-storage__CSI_mock_volume_storage_capacity_unlimited__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 126.0s

_sig-api-machinery__client-go_should_negotiate_watch_and_report_errors_with_accept__application/vnd.kubernetes.protobuf___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

_sig-storage__Subpath_Atomic_writer_volumes_should_support_subpaths_with_projected_pod__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 51.2s

_sig-node__Kubelet_when_scheduling_a_busybox_command_that_always_fails_in_a_pod_should_have_an_terminated_reason__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 28.7s

_sig-network__DNS_should_provide_DNS_for_pods_for_Hostname__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.1s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:57.017: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-apps___Feature_TTLAfterFinished__job_should_be_deleted_once_it_finishes_after_TTL_seconds__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 71.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:51.803: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:51.470: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 176.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:40.639: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:40.277: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:39.929: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:39.645: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV,_based_on_the_allowed_zones_specified_in_storage_class_when_the_datastore_under_the_zone_is_present_in_another_datacenter__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:43:39.045: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:43:39.295407 1046284 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:43:39.295: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:43:39.307: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-5119" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-apps__Deployment_test_Deployment_ReplicaSet_orphaning_and_adoption_regarding_controllerRef__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 48.9s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:34.507: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:34.158: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:33.780: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:33.496: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:33.190: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:32.859: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:32.551: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__NodeLease_when_the_NodeLease_feature_is_enabled_should_have_OwnerReferences_set__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:31.227: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 29.7s

_sig-apps__Deployment_deployment_reaping_should_cascade_to_its_replica_sets_and_pods__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 30.4s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:08.129: INFO: Driver "cinder" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:07.758: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:07.406: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:06.984: INFO: Driver hostPathSymlink doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:06.573: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:06.218: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Projected_downwardAPI_should_provide_podname_only__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 47.1s

_sig-node__AppArmor_load_AppArmor_profiles_should_enforce_an_AppArmor_profile__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/framework/skipper/skipper.go:291]: Only supported for node OS distro [gci ubuntu] (not custom)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-node] AppArmor
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename apparmor
Oct 13 09:43:03.761: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:43:03.990282 1044831 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:43:03.990: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  k8s.io/kubernetes@v1.22.1/test/e2e/node/apparmor.go:32
Oct 13 09:43:04.002: INFO: Only supported for node OS distro [gci ubuntu] (not custom)
[AfterEach] load AppArmor profiles
  k8s.io/kubernetes@v1.22.1/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-apparmor-900" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/framework/skipper/skipper.go:291]: Only supported for node OS distro [gci ubuntu] (not custom)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 09:43:03.011: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:43:03.185013 1044818 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:43:03.185: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 09:43:03.191: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-6606" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-node__Security_Context_When_creating_a_container_with_runAsNonRoot_should_run_with_an_explicit_non-root_user_ID__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 28.9s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:01.921: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-api-machinery__ServerSideApply_should_create_an_applied_object_if_it_does_not_already_exist__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 160.0s

_sig-storage__PersistentVolumes_vsphere__Feature_vsphere__should_test_that_deleting_the_PV_before_the_pod_does_not_cause_pod_deletion_to_fail_on_vsphere_volume_detach__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:64]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pv
Oct 13 09:42:53.267: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:42:53.441407 1044402 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:42:53.441: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
Oct 13 09:42:53.445: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pv-9480" for this suite.
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:112
Oct 13 09:42:53.463: INFO: AfterEach: Cleaning up test resources
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:64]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 39.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:47.752: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:47.363: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_perform_rolling_updates_and_roll_backs_of_template_modifications_with_PVCs__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 233.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:46.907: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:46.576: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:46.194: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:45.805: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-autoscaling___HPA__Horizontal_pod_autoscaling__scale_resource__Custom_Metrics_from_Stackdriver__should_scale_up_with_two_metrics_of_type_Pod_from_Stackdriver__Feature_CustomMetricsAutoscaling___Skipped_gce___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 09:42:45.430: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:45.042: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__CSI_mock_volume_CSIServiceAccountToken_token_should_be_plumbed_down_when_csiServiceAccountTokenEnabled=true__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 205.0s

_sig-storage__ConfigMap_should_be_consumable_from_pods_in_volume_with_defaultMode_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.1s

_sig-api-machinery__Watchers_should_be_able_to_start_watching_from_a_specific_resource_version__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-autoscaling___HPA__Horizontal_pod_autoscaling__scale_resource__Custom_Metrics_from_Stackdriver__should_scale_down_with_External_Metric_with_target_value_from_Stackdriver__Feature_CustomMetricsAutoscaling___Skipped_gce___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 09:42:35.852: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:35.529: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:35.226: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-network__Proxy_version_v1_A_set_of_valid_responses_are_returned_for_both_pod_and_service_ProxyWithPath__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.1s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:31.313: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 151.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:22.330: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:21.957: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 41.7s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:18.900: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:18.524: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_mock_volume_CSI_workload_information_using_mock_driver_contain_ephemeral=true_when_using_inline_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 385.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 44.3s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:00.009: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:59.642: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:59.334: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:58.966: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-api-machinery__API_priority_and_fairness_should_ensure_that_requests_can't_be_drowned_out__priority___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:100]: skipping test until flakiness is resolved
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-api-machinery] API priority and fairness
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename apf
Oct 13 09:41:58.390: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:58.602352 1042481 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:58.602: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that requests can't be drowned out (priority) [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:98
[AfterEach] [sig-api-machinery] API priority and fairness
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-apf-7242" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:100]: skipping test until flakiness is resolved

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:57.840: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes_GCEPD_should_test_that_deleting_the_PV_before_the_pod_does_not_cause_pod_deletion_to_fail_on_PD_detach__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.2s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:85]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pv
Oct 13 09:41:57.196: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:57.482584 1041865 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:57.483: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:77
Oct 13 09:41:57.493: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pv-3949" for this suite.
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:111
Oct 13 09:41:57.505: INFO: AfterEach: Cleaning up test resources
Oct 13 09:41:57.505: INFO: pvc is nil
Oct 13 09:41:57.505: INFO: pv is nil
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:85]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 50.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:56.309: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/capacity.go:78]: Driver cinder doesn't publish storage capacity -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:56.279: INFO: Driver cinder doesn't publish storage capacity -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/capacity.go:78]: Driver cinder doesn't publish storage capacity -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:55.961: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:55.912: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:55.499: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__Server_request_timeout_the_request_should_be_served_with_a_default_timeout_if_the_specified_timeout_in_the_request_URL_exceeds_maximum_allowed__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:54.032: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:53.676: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Pods_should_support_remote_command_execution_over_websockets__NodeConformance___Conformance___Skipped_Proxy___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 25.1s

_sig-node__Security_Context_When_creating_a_pod_with_privileged_should_run_the_container_as_privileged_when_true__LinuxOnly___NodeFeature_HostAccess___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:48.616: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-node__Downward_API_should_provide_host_IP_as_an_env_var__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.1s

_sig-storage__Volume_Provisioning_On_Clustered_Datastore__Feature_vsphere__verify_static_provisioning_on_clustered_datastore__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:53]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-provision
Oct 13 09:41:19.991: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:20.146030 1041080 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:20.146: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:52
Oct 13 09:41:20.149: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-provision-814" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:53]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-network__SCTP__Feature_SCTP___LinuxOnly__should_allow_creating_a_basic_SCTP_service_with_pod_and_endpoints__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.7s

_sig-instrumentation__Events_API_should_delete_a_collection_of_events__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:17.119: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:16.701: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-autoscaling___HPA__Horizontal_pod_autoscaling__scale_resource__Custom_Metrics_from_Stackdriver__should_scale_down_with_Custom_Metric_of_type_Pod_from_Stackdriver_with_Prometheus__Feature_CustomMetricsAutoscaling___Skipped_gce___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 09:41:16.314: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:15.879: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 82.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:12.767: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
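
The DynamicPV / InlineVolume skips in the driver-parameterized entries (testsuite.go:116) follow a capability check: each in-tree driver declares which volume types it supports, and the shared test suite skips any pattern the driver does not claim. The snippet below is a simplified sketch of that check under assumed names; VolType, TestDriver, and skipUnsupportedPattern are placeholders, not the framework's API.

package sketch

// Simplified sketch of a driver-capability gate like the one behind
// "Driver hostPathSymlink doesn't support DynamicPV -- skipping".

type VolType string

const (
	InlineVolume     VolType = "InlineVolume"
	PreprovisionedPV VolType = "PreprovisionedPV"
	DynamicPV        VolType = "DynamicPV"
)

// TestDriver is a placeholder for whatever interface exposes a driver's
// declared supported volume types.
type TestDriver interface {
	Name() string
	SupportedVolTypes() map[VolType]bool
}

// skipUnsupportedPattern records a skip when the driver does not claim
// support for the volume type the current test pattern needs.
func skipUnsupportedPattern(d TestDriver, needed VolType, skipf func(format string, args ...interface{})) {
	if !d.SupportedVolTypes()[needed] {
		skipf("Driver %s doesn't support %s -- skipping", d.Name(), needed)
	}
}

The same mechanism produces the fs-type skips in this report (e.g. "Driver cinder doesn't support ext3", testsuite.go:121): the driver's declared filesystem types are checked against the test pattern before the test body runs.
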
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:12.464: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV_with_storage_policy_specified_in_storage_class_in_waitForFirstConsumer_binding_mode__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:41:11.848: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:12.067360 1040597 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:12.067: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:41:12.071: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-1147" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:11.255: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:10.904: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__vsphere_cloud_provider_stress__Feature_vsphere__vsphere_stress_tests__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_stress.go:61]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] vsphere cloud provider stress [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename vcp-stress
Oct 13 09:41:10.257: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:10.566408 1040556 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:10.566: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vsphere cloud provider stress [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_stress.go:60
Oct 13 09:41:10.570: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] vsphere cloud provider stress [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-vcp-stress-2848" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_stress.go:61]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-node__Variable_Expansion_should_allow_substituting_values_in_a_container's_args__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 82.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:09.549: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:09.225: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-api-machinery__ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_persistent_volume_claim_with_a_storage_class__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 11.8s

_sig-storage__PersistentVolumes-local__Pod_with_node_different_from_PV's_NodeAffinity_should_fail_scheduling_due_to_different_NodeSelector__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 5.4s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:03.357: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:03.000: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:02.649: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:02.203: INFO: Driver emptydir doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext3 -- skipping

Stderr
_sig-storage__ConfigMap_should_be_consumable_from_pods_in_volume_as_non-root_with_defaultMode_and_fsGroup_set__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.3s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:00.946: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:41:00.629: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_version_should_check_is_all_data_is_printed___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:59.494: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-network__DNS_should_support_configurable_pod_resolv.conf__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 57.4s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:58.105: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:57.750: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Simple_pod_should_support_inline_execution_and_attach__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 123.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:49.905: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:49.549: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 68.0s

_sig-cli__Kubectl_client_Kubectl_client-side_validation_should_create/apply_a_valid_CR_for_CRD_with_validation_schema__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 19.1s

_sig-node__Sysctls__LinuxOnly___NodeConformance__should_support_unsafe_sysctls_which_are_actually_allowed__MinimumKubeletVersion_1.21___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.9s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:26.219: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:25.869: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:25.536: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-auth__Metadata_Concealment_should_run_a_check-metadata-concealment_job_to_completion__Skipped_gce___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/auth/metadata_concealment.go:35]: Only supported for providers [gce] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-auth] Metadata Concealment
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename metadata-concealment
Oct 13 09:40:25.003: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:40:25.184032 1038423 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:40:25.184: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a check-metadata-concealment job to completion [Skipped:gce] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/auth/metadata_concealment.go:34
Oct 13 09:40:25.187: INFO: Only supported for providers [gce] (not openstack)
[AfterEach] [sig-auth] Metadata Concealment
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-metadata-concealment-1153" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/auth/metadata_concealment.go:35]: Only supported for providers [gce] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:24.521: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:24.212: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:23.836: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_with_invalid_zone_specified_in_storage_class_fails__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:40:23.299: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:40:23.528084 1038370 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:40:23.528: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:40:23.533: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-4254" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:22.761: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__Secrets_should_be_consumable_from_pods_in_volume_with_mappings__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:18.199: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:17.871: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:17.517: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:17.181: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-node__Container_Runtime_blackbox_test_when_running_a_container_with_a_new_image_should_not_be_able_to_pull_image_from_invalid_registry__NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 45.3s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:16.436: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:16.104: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:15.782: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:15.458: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__API_priority_and_fairness_should_ensure_that_requests_can_be_classified_by_adding_FlowSchema_and_PriorityLevelConfiguration__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 2.7s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:14.163: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:13.865: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:13.533: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:13.217: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:12.892: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 46.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:11.184: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__tmpfs__Two_pods_mounting_a_local_volume_one_after_the_other_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 48.4s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:10.834: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__ServerSideApply_should_ignore_conflict_errors_if_force_apply_is_used__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.2s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:09.332: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV,_based_on_the_allowed_zones_and_datastore_specified_in_storage_class__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:40:08.790: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:40:09.028353 1037785 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:40:09.029: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:40:09.034: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-9705" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:08.053: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_mutate_custom_resource__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 60.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:40:07.641: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-bindmounted__Set_fsGroup_for_local_volume_should_set_different_fsGroup_for_second_pod_if_first_pod_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 7.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 09:40:00.543: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:40:00.957352 1037211 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:40:00.957: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 13 09:40:03.101: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb && mount --bind /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb] Namespace:e2e-persistent-local-volumes-test-8619 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-jlx5b ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 09:40:03.274: INFO: Creating a PV followed by a PVC
Oct 13 09:40:03.293: INFO: Waiting for PV local-pv9zn2j to bind to PVC pvc-h7f9z
Oct 13 09:40:03.293: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-h7f9z] to have phase Bound
Oct 13 09:40:03.298: INFO: PersistentVolumeClaim pvc-h7f9z found but phase is Pending instead of Bound.
Oct 13 09:40:05.306: INFO: PersistentVolumeClaim pvc-h7f9z found but phase is Pending instead of Bound.
Oct 13 09:40:07.313: INFO: PersistentVolumeClaim pvc-h7f9z found and phase=Bound (4.02005597s)
Oct 13 09:40:07.313: INFO: Waiting up to 3m0s for PersistentVolume local-pv9zn2j to have phase Bound
Oct 13 09:40:07.320: INFO: PersistentVolume local-pv9zn2j found and phase=Bound (7.39606ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 09:40:07.331: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir-bindmounted]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 09:40:07.332: INFO: Deleting PersistentVolumeClaim "pvc-h7f9z"
Oct 13 09:40:07.350: INFO: Deleting PersistentVolume "local-pv9zn2j"
STEP: Removing the test directory
Oct 13 09:40:07.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb && rm -r /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb] Namespace:e2e-persistent-local-volumes-test-8619 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-jlx5b ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-8619" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed

Stderr
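Note: the stdout above narrates the common local-volume fixture: the hostexec pod bind-mounts a directory, the test creates a PV followed by a PVC, then polls until the claim reports phase Bound before the spec is (in this case) skipped and the fixture is torn down. A condensed client-go sketch of that create-and-wait step follows; names, paths, and sizes are illustrative, and the real test builds these objects through its own helpers.

```go
// Condensed sketch of "Creating a PV followed by a PVC ... Waiting ... to have
// phase Bound" as logged above. Names, paths and sizes are illustrative.
package storage

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func createLocalPVAndWait(ctx context.Context, cs kubernetes.Interface, ns, node string) error {
	sc := "local-storage"
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: v1.PersistentVolumeSpec{
			StorageClassName: sc,
			Capacity:         v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				// Path would be the bind-mounted directory prepared via the hostexec pod.
				Local: &v1.LocalVolumeSource{Path: "/tmp/local-volume-test"},
			},
			// Local PVs must pin the volume to the node that hosts the directory.
			NodeAffinity: &v1.VolumeNodeAffinity{Required: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key: "kubernetes.io/hostname", Operator: v1.NodeSelectorOpIn, Values: []string{node},
					}},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{}); err != nil {
		return err
	}
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-", Namespace: ns},
		Spec: v1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{ // PVC resources type as of the v1.22 API used here
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}
	created, err := cs.CoreV1().PersistentVolumeClaims(ns).Create(ctx, pvc, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Poll until the claim is Bound, mirroring the "up to timeout=3m0s" wait in the log.
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		got, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, created.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return got.Status.Phase == v1.ClaimBound, nil
	})
}
```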
_sig-node__Pods_should_contain_environment_variables_for_services__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.7s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:38.882: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:38.499: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSI_FSGroupPolicy__LinuxOnly__should_modify_fsGroup_if_fsGroupPolicy=default__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 189.0s

_sig-node__Docker_Containers_should_use_the_image_defaults_if_command_and_args_are_blank__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 52.9s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:29.564: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:29.233: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-storage__Projected_secret_should_be_consumable_from_pods_in_volume_as_non-root_with_defaultMode_and_fsGroup_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.1s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:26.764: INFO: Driver cinder doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:26.375: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.2s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:19.569: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Container_Runtime_blackbox_test_when_running_a_container_with_a_new_image_should_be_able_to_pull_image__NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 31.2s

_sig-storage__PersistentVolumes-local___Volume_type__dir__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_write_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 36.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:01.572: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:01.251: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 52.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:39:00.122: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:59.780: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.9s

_sig-node__Security_Context_should_support_seccomp_runtime/default__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 29.2s

_sig-auth___Feature_NodeAuthenticator__The_kubelet_can_delegate_ServiceAccount_tokens_to_the_API_server__Skipped_ibmcloud___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:12.438: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:11.940: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-auth___Feature_NodeAuthorizer__A_node_shouldn't_be_able_to_delete_another_node__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:10.677: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:10.320: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:09.992: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:09.647: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:09.331: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:09.024: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:08.696: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:08.360: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:07.947: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 39.8s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:01.447: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:01.035: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:00.596: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:38:00.206: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-api-machinery__Garbage_collector_should_not_delete_dependents_that_have_both_valid_owner_and_owner_that's_waiting_for_dependents_to_be_deleted__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 36.7s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:58.831: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_describe_should_check_if_kubectl_describe_prints_relevant_information_for_rc_and_pods___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 66.0s

_sig-storage__PV_Protection_Verify_that_PV_bound_to_a_PVC_is_not_removed_immediately__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 5.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:54.642: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:54.619: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:54.212: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:53.807: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
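Note: the 'Driver "local" does not provide raw block' skip reflects what the (block volmode) patterns require: a claim with volumeMode set to Block and a pod that attaches the volume as a device rather than a filesystem mount, which the local dir volume type cannot offer. A small illustrative sketch of that shape (names, image, and device path are hypothetical):

```go
// Illustrative only: the raw-block claim and pod shape exercised by the
// "(block volmode)" test patterns. Names, image and device path are hypothetical.
package storage

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func rawBlockClaimAndPod(ns string) (*v1.PersistentVolumeClaim, *v1.Pod) {
	block := v1.PersistentVolumeBlock
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "raw-block-pvc", Namespace: ns},
		Spec: v1.PersistentVolumeClaimSpec{
			VolumeMode:  &block, // Block instead of the default Filesystem
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "raw-block-pod", Namespace: ns},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "app",
				Image: "busybox:1.36", // illustrative image
				// The volume is exposed as a device node, not a mounted filesystem.
				VolumeDevices: []v1.VolumeDevice{{Name: "data", DevicePath: "/dev/xvda"}},
			}},
			Volumes: []v1.Volume{{
				Name: "data",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: "raw-block-pvc"},
				},
			}},
		},
	}
	return pvc, pod
}
```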
_sig-node__Kubelet_when_scheduling_a_busybox_command_that_always_fails_in_a_pod_should_be_possible_to_delete__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:52.607: INFO: Driver nfs doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:52.191: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-link-bindmounted__Set_fsGroup_for_local_volume_should_set_different_fsGroup_for_second_pod_if_first_pod_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 5.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 09:37:49.493: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:37:49.731443 1032321 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:37:49.731: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 13 09:37:51.820: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend && mount --bind /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend && ln -s /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc] Namespace:e2e-persistent-local-volumes-test-2337 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-8jkpv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 09:37:51.974: INFO: Creating a PV followed by a PVC
Oct 13 09:37:52.013: INFO: Waiting for PV local-pvzhlcx to bind to PVC pvc-xgw9b
Oct 13 09:37:52.013: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xgw9b] to have phase Bound
Oct 13 09:37:52.017: INFO: PersistentVolumeClaim pvc-xgw9b found but phase is Pending instead of Bound.
Oct 13 09:37:54.027: INFO: PersistentVolumeClaim pvc-xgw9b found and phase=Bound (2.01395801s)
Oct 13 09:37:54.027: INFO: Waiting up to 3m0s for PersistentVolume local-pvzhlcx to have phase Bound
Oct 13 09:37:54.031: INFO: PersistentVolume local-pvzhlcx found and phase=Bound (3.930299ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 09:37:54.039: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir-link-bindmounted]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 09:37:54.039: INFO: Deleting PersistentVolumeClaim "pvc-xgw9b"
Oct 13 09:37:54.051: INFO: Deleting PersistentVolume "local-pvzhlcx"
STEP: Removing the test directory
Oct 13 09:37:54.064: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc && umount /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend && rm -r /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend] Namespace:e2e-persistent-local-volumes-test-2337 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-8jkpv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-2337" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed

Stderr
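
Note: the Stdout above walks through the test's setup: a bind-mounted backing directory is created on the node via nsenter, then a local PersistentVolume and a matching PersistentVolumeClaim are created ("Creating a PV followed by a PVC") and left to bind. The Go sketch below shows roughly what such a local PV/PVC pair looks like in client-go terms. It is illustrative only; the StorageClass name, node name, path, sizes, and namespace are placeholders, and this is not the e2e framework's own helper code.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig-based client; the e2e tests use the framework's own client.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	sc := "local-storage" // placeholder StorageClass name

	// A local PV pointing at a host path and pinned to one node, in the spirit of
	// the test's bind-mounted /tmp/local-volume-test-* directory.
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-example"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:                      corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              sc,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-example"},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"worker-0"}, // placeholder node name
						}},
					}},
				},
			},
		},
	}

	// A claim sized and classed to bind against the PV above.
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pvc-example"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}

	ctx := context.TODO()
	if _, err := cs.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if _, err := cs.CoreV1().PersistentVolumeClaims("default").Create(ctx, pvc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}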
_sig-storage__PersistentVolumes__Feature_vsphere__Feature_LabelSelector__Selector-Label_Volume_Binding_vsphere__Feature_vsphere__should_bind_volume_with_claim_for_given_label__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pvc_label_selector.go:65]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:LabelSelector]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pvclabelselector
Oct 13 09:37:48.652: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:37:48.904219 1032308 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:37:48.904: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:LabelSelector]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pvc_label_selector.go:64
Oct 13 09:37:48.910: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:LabelSelector]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pvclabelselector-1693" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pvc_label_selector.go:65]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:48.129: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:47.777: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:47.412: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir__Set_fsGroup_for_local_volume_should_set_different_fsGroup_for_second_pod_if_first_pod_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 9.2s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 09:37:43.064: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:37:43.257117 1032128 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:37:43.257: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 13 09:37:45.337: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b7b8c4d8-0a14-4c44-b0bd-57517ab47b7e] Namespace:e2e-persistent-local-volumes-test-3870 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-8qtdt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 09:37:45.467: INFO: Creating a PV followed by a PVC
Oct 13 09:37:45.488: INFO: Waiting for PV local-pvwk9hf to bind to PVC pvc-r7pjx
Oct 13 09:37:45.488: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-r7pjx] to have phase Bound
Oct 13 09:37:45.499: INFO: PersistentVolumeClaim pvc-r7pjx found but phase is Pending instead of Bound.
Oct 13 09:37:47.504: INFO: PersistentVolumeClaim pvc-r7pjx found but phase is Pending instead of Bound.
Oct 13 09:37:49.516: INFO: PersistentVolumeClaim pvc-r7pjx found but phase is Pending instead of Bound.
Oct 13 09:37:51.522: INFO: PersistentVolumeClaim pvc-r7pjx found and phase=Bound (6.03387078s)
Oct 13 09:37:51.522: INFO: Waiting up to 3m0s for PersistentVolume local-pvwk9hf to have phase Bound
Oct 13 09:37:51.528: INFO: PersistentVolume local-pvwk9hf found and phase=Bound (6.224597ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 09:37:51.536: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 09:37:51.536: INFO: Deleting PersistentVolumeClaim "pvc-r7pjx"
Oct 13 09:37:51.551: INFO: Deleting PersistentVolume "local-pvwk9hf"
STEP: Removing the test directory
Oct 13 09:37:51.582: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b7b8c4d8-0a14-4c44-b0bd-57517ab47b7e] Namespace:e2e-persistent-local-volumes-test-3870 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-8qtdt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-3870" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed

Stderr
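
Note: as in the previous block, the Stdout above shows the framework polling the claim until it reports phase Bound ("Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-r7pjx] to have phase Bound"). Below is a minimal client-go sketch of that kind of wait loop, assuming a kubeconfig-based client; the helper name, namespace, and claim name are placeholders, and this is not the framework's own wait helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound (hypothetical helper) polls a PersistentVolumeClaim until
// its status phase is Bound or the timeout expires.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

func main() {
	// Placeholder kubeconfig-based client, namespace, and claim name.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(cs, "default", "my-pvc", 3*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("PVC is Bound")
}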
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:42.567: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 109.0s

_sig-network__Networking_IPerf2__Feature_Networking-Performance__should_run_iperf2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 162.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:33.360: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-node__Security_Context_should_support_seccomp_unconfined_on_the_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.1s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:25.345: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:24.973: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 130.0s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:18.918: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:18.544: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 179.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:13.878: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:13.547: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:13.227: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:12.856: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:12.481: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-node__KubeletManagedEtcHosts_should_test_kubelet_managed_/etc/hosts_file__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 36.2s

_sig-storage__Zone_Support__Feature_vsphere__Verify_dynamically_created_pv_with_multiple_zones_specified_in_the_storage_class,_shows_both_the_zones_on_its_labels__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:37:05.741: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:37:05.927847 1030432 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:37:05.927: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:37:05.934: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-8344" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:37:05.229: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-cli__Kubectl_Port_forwarding_With_a_server_listening_on_localhost_that_expects_a_client_request_should_support_a_client_that_connects,_sends_NO_DATA,_and_disconnects__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 31.1s

_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_perform_canary_updates_and_phased_rolling_updates_of_template_modifications__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 201.0s

_sig-node__Security_Context_when_creating_containers_with_AllowPrivilegeEscalation_should_allow_privilege_escalation_when_not_explicitly_set_and_uid_!=_0__LinuxOnly___NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 39.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 34.5s

_sig-node__Probing_container_should_be_restarted_with_a_exec__cat_/tmp/health__liveness_probe__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 73.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:24.457: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:24.071: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:23.647: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:23.247: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:22.822: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:22.422: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:22.016: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:21.670: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:21.293: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:20.978: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-auth___Feature_NodeAuthorizer__Getting_a_non-existent_secret_should_exit_with_the_Forbidden_error,_not_a_NotFound_error__Skipped_ibmcloud___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-apps__Deployment_Deployment_should_have_a_working_scale_subresource__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 47.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 89.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:17.875: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:36:17.551: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__blockfswithoutformat__Two_pods_mounting_a_local_volume_one_after_the_other_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.8s

_sig-api-machinery__CustomResourceDefinition_resources__Privileged_ClusterAdmin__Simple_CustomResourceDefinition_listing_custom_resource_definition_objects_works___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 7.9s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:55.077: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-node__InitContainer__NodeConformance__should_invoke_init_containers_on_a_RestartAlways_pod__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 49.3s

_sig-storage__CSI_mock_volume_CSI_Volume_Snapshots_secrets__Feature_VolumeSnapshotDataSource__volume_snapshot_create/delete_with_secrets__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 141.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:46.265: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:45.930: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Projected_configMap_should_be_consumable_from_pods_in_volume_with_mappings__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:22.598: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:22.201: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-node__Secrets_should_fail_to_create_secret_due_to_empty_secret_key__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-api-machinery__client-go_should_negotiate_watch_and_report_errors_with_accept__application/vnd.kubernetes.protobuf,application/json___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:20.490: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 58.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:19.065: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Secrets_should_be_consumable_from_pods_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.2s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:18.747: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:18.653: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:18.327: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:18.288: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:17.943: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:17.861: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:17.525: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:17.180: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:16.843: INFO: Driver hostPathSymlink doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ntfs -- skipping

Stderr
_sig-auth___Feature_NodeAuthorizer__Getting_a_secret_for_a_workload_the_node_has_access_to_should_succeed__Skipped_ibmcloud___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 3.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:14.421: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_not_deadlock_when_a_pod's_predecessor_fails__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 132.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:13.029: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.7s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:35:07.056: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-apps__CronJob_should_delete_successful_finished_jobs_with_limit_of_one_successful_job__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 83.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:54.780: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__ConfigMap_should_be_consumable_in_multiple_volumes_in_the_same_pod__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 21.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:53.122: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:52.758: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-bindmounted__Two_pods_mounting_a_local_volume_at_the_same_time_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 40.5s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:32.135: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:31.821: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_be_able_to_deny_custom_resource_creation,_update_and_deletion__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 48.8s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:31.345: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:30.929: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:30.553: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:30.239: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:29.891: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-apps__DisruptionController_should_block_an_eviction_until_the_PDB_is_updated_to_allow_it__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 53.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:23.459: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:23.102: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:22.741: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Subpath_Atomic_writer_volumes_should_support_subpaths_with_downward_pod__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 49.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:34:17.509: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-auth___Feature_NodeAuthenticator__The_kubelet's_main_port_10250_should_reject_requests_with_no_credentials__Skipped_ibmcloud___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 46.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:44.576: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:44.135: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:43.698: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 46.7s

_sig-apps__Job_should_delete_pods_when_suspended__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 85.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:29.506: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:29.018: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:28.589: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:28.127: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:27.681: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Variable_Expansion_should_allow_composing_env_vars_into_new_env_vars__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 55.4s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:27.266: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_mock_volume_CSIStorageCapacity_CSIStorageCapacity_disabled__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 191.0s

_sig-api-machinery__Garbage_collector_should_delete_pods_created_by_rc_when_not_orphaning__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 50.9s

_sig-node__Pods_Extended_Pods_Set_QOS_Class_should_be_set_on_Pods_with_matching_resource_requests_and_limits_for_memory_and_cpu__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-api-machinery__CustomResourceDefinition_resources__Privileged_ClusterAdmin__should_include_custom_resource_definition_resources_in_discovery_documents__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_Snapshot__delete_policy___snapshottable_Feature_VolumeSnapshotDataSource__volume_snapshot_controller__should_check_snapshot_fields,_check_restore_correctly_works_after_modifying_source_data,_check_deletion__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 226.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:15.872: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:15.471: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:15.130: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:14.776: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:14.394: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:13.989: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:13.594: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:13.204: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__CustomResourcePublishOpenAPI__Privileged_ClusterAdmin__works_for_multiple_CRDs_of_different_groups__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 154.0s

_sig-storage__Volume_Placement__Feature_vsphere__should_create_and_delete_pod_with_the_same_volume_source_on_the_same_worker_node__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-placement
Oct 13 09:33:12.346: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:33:12.829517 1021734 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:33:12.829: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55
Oct 13 09:33:12.839: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-placement-2538" for this suite.
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__PersistentVolumes__Feature_vsphere__Feature_ReclaimPolicy__persistentvolumereclaim_vsphere__Feature_vsphere__should_retain_persistent_volume_when_reclaimPolicy_set_to_retain_when_associated_claim_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:55]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistentvolumereclaim
Oct 13 09:33:11.637: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:33:11.895798 1021722 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:33:11.895: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:47
[BeforeEach] persistentvolumereclaim:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:54
Oct 13 09:33:11.918: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] persistentvolumereclaim:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:63
STEP: running testCleanupVSpherePersistentVolumeReclaim
[AfterEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistentvolumereclaim-3487" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:55]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:10.989: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:10.660: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-node__Security_Context_when_creating_containers_with_AllowPrivilegeEscalation_should_not_allow_privilege_escalation_when_false__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 37.1s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:05.719: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:33:05.299: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__ReplicaSet_Replicaset_should_have_a_working_scale_subresource__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 27.9s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:58.926: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 31.1s

_sig-apps__ReplicationController_should_serve_a_basic_image_on_each_replica_with_a_public_image___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 32.8s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.9s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:41.540: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:41.197: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:40.849: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:40.485: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:40.157: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:39.729: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__CustomResourceConversionWebhook__Privileged_ClusterAdmin__should_be_able_to_convert_a_non_homogeneous_list_of_CRs__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 34.3s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:36.909: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:36.474: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext3 -- skipping

Stderr
_sig-autoscaling___HPA__Horizontal_pod_autoscaling__scale_resource__Custom_Metrics_from_Stackdriver__should_scale_down_with_Custom_Metric_of_type_Pod_from_Stackdriver__Feature_CustomMetricsAutoscaling___Skipped_gce___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 09:32:36.058: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 44.1s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:20.886: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 167.0s

_sig-storage__Secrets_should_be_consumable_from_pods_in_volume_with_defaultMode_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 53.3s

_sig-storage__PersistentVolumes_NFS_with_Single_PV_-_PVC_pairs_create_a_PV_and_a_pre-bound_PVC__test_write_access__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 102.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:32:01.645: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-node__Downward_API_should_provide_pod_name,_namespace_and_IP_address_as_env_vars__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 39.1s

_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__Should_recreate_evicted_statefulset__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 63.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:52.729: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:52.405: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-autoscaling___Feature_HPA__Horizontal_pod_autoscaling__scale_resource__CPU__ReplicationController_light_Should_scale_from_1_pod_to_2_pods__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 900.0s

Failed:
Oct 13 09:39:54.415: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:39:54.415: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:40:14.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:40:24.146: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:40:24.146: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:40:24.146: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:40:24.146: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:40:24.444: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:40:24.444: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:40:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:40:54.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:40:54.157: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:40:54.157: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:40:54.157: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:40:54.157: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:40:54.476: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:40:54.477: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:41:14.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:41:24.169: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:41:24.169: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:41:24.169: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:41:24.169: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:41:24.519: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:41:24.519: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:41:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:41:54.030: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:41:54.179: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:41:54.179: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:41:54.180: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:41:54.180: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:41:54.552: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:41:54.553: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:42:14.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:42:24.187: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:42:24.187: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:42:24.187: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:42:24.187: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:42:24.604: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:42:24.605: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:42:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:42:54.036: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:42:54.196: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:42:54.196: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:42:54.196: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:42:54.196: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:42:54.642: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:42:54.643: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:43:14.026: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:43:24.223: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:43:24.223: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:43:24.223: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:43:24.223: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:43:24.707: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:43:24.707: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:43:34.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:43:54.028: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:43:54.230: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:43:54.230: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:43:54.230: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:43:54.230: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:43:54.781: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:43:54.781: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:44:14.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:44:24.239: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:44:24.239: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:44:24.239: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:44:24.239: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:44:24.820: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:44:24.820: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:44:34.028: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:44:54.033: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:44:54.250: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:44:54.250: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:44:54.251: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:44:54.251: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:44:54.874: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:44:54.874: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:45:14.023: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:45:24.266: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:45:24.266: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:45:24.266: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:45:24.266: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:45:24.901: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:45:24.902: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:45:34.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:45:54.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:45:54.279: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:45:54.279: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:45:54.279: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:45:54.280: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:45:54.929: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:45:54.929: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:46:14.029: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:46:24.294: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:46:24.294: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:46:24.305: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:46:24.305: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:46:24.974: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:46:24.974: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:46:34.024: INFO: waiting for 2 replicas (current: 1)
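
The excerpt above shows the test timing out after roughly 15 minutes with the rc-light ReplicationController stuck at 1 replica, even though the consumer was repeatedly asked to burn 150 millicores. A common cause of this pattern is that the metrics pipeline never reported CPU usage for the pod, so the HorizontalPodAutoscaler had nothing to act on. One way to narrow that down is to read the HPA status directly with client-go, as in the sketch below; the kubeconfig path and the HPA name "rc-light" are illustrative assumptions, not values taken from this run (only the namespace comes from the log).

// Hedged diagnostic sketch: reads the HPA status for the namespace seen in the
// log above to distinguish "no metrics reported" from "metrics present but no
// scale decision". Not part of the captured test output.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Namespace taken from the log; the HPA name "rc-light" is an assumption
	// made for illustration.
	hpa, err := clientset.AutoscalingV1().
		HorizontalPodAutoscalers("e2e-horizontal-pod-autoscaling-8934").
		Get(context.TODO(), "rc-light", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// CurrentCPUUtilizationPercentage stays nil until the metrics pipeline
	// (e.g. metrics-server or an adapter) reports pod CPU usage, which is a
	// common reason an HPA test sits at "waiting for 2 replicas (current: 1)".
	fmt.Printf("current/desired replicas: %d/%d\n",
		hpa.Status.CurrentReplicas, hpa.Status.DesiredReplicas)
	if hpa.Status.CurrentCPUUtilizationPercentage != nil {
		fmt.Printf("current CPU utilization: %d%%\n",
			*hpa.Status.CurrentCPUUtilizationPercentage)
	} else {
		fmt.Println("current CPU utilization: <unknown> (no metrics reported)")
	}
}

If the utilization comes back as unknown, the stall points at the cluster's metrics stack rather than the autoscaler or the consumer workload itself.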

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename horizontal-pod-autoscaling
Oct 13 09:31:48.594: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:31:48.787738 1018869 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:31:48.787: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should scale from 1 pod to 2 pods [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/horizontal_pod_autoscaling.go:69
STEP: Running consuming RC rc-light via /v1, Kind=ReplicationController with 1 replicas
STEP: creating replication controller rc-light in namespace e2e-horizontal-pod-autoscaling-8934
I1013 09:31:48.835690 1018869 runners.go:190] Created replication controller with name: rc-light, namespace: e2e-horizontal-pod-autoscaling-8934, replica count: 1
I1013 09:31:58.889251 1018869 runners.go:190] rc-light Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1013 09:32:08.889587 1018869 runners.go:190] rc-light Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1013 09:32:18.890733 1018869 runners.go:190] rc-light Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1013 09:32:28.891723 1018869 runners.go:190] rc-light Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: Running controller
STEP: creating replication controller rc-light-ctrl in namespace e2e-horizontal-pod-autoscaling-8934
I1013 09:32:28.937551 1018869 runners.go:190] Created replication controller with name: rc-light-ctrl, namespace: e2e-horizontal-pod-autoscaling-8934, replica count: 1
I1013 09:32:38.988064 1018869 runners.go:190] rc-light-ctrl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1013 09:32:48.989163 1018869 runners.go:190] rc-light-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 13 09:32:53.989: INFO: Waiting for amount of service:rc-light-ctrl endpoints to be 1
Oct 13 09:32:53.993: INFO: RC rc-light: consume 150 millicores in total
Oct 13 09:32:53.993: INFO: RC rc-light: sending request to consume 0 millicores
Oct 13 09:32:53.993: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=0&requestSizeMillicores=100  }
Oct 13 09:32:54.002: INFO: RC rc-light: setting consumption to 150 millicores in total
Oct 13 09:32:54.002: INFO: RC rc-light: consume 0 MB in total
Oct 13 09:32:54.002: INFO: RC rc-light: setting consumption to 0 MB in total
Oct 13 09:32:54.002: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:32:54.002: INFO: RC rc-light: consume custom metric 0 in total
Oct 13 09:32:54.002: INFO: RC rc-light: setting bump of metric QPS to 0 in total
Oct 13 09:32:54.002: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:32:54.002: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:32:54.002: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:32:54.018: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:33:14.053: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:33:24.003: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:33:24.003: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:33:24.015: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:33:24.015: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:33:24.016: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:33:24.016: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:33:34.035: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:33:54.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:34:14.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:34:24.018: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:34:24.018: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:34:24.018: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:34:24.018: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:34:24.018: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:34:24.018: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:34:34.030: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:34:54.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:35:14.026: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:35:24.026: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:35:24.026: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:35:24.026: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:35:24.026: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:35:24.026: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:35:24.026: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:35:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:35:54.030: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:35:54.043: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:35:54.043: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:35:54.043: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:35:54.043: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:35:54.071: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:35:54.071: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:36:14.026: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:36:24.059: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:36:24.059: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:36:24.059: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:36:24.059: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:36:24.109: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:36:24.109: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:36:34.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:36:54.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:36:54.074: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:36:54.074: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:36:54.075: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:36:54.075: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:36:54.148: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:36:54.148: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:37:14.022: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:37:24.083: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:37:24.083: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:37:24.083: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:37:24.083: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:37:24.199: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:37:24.199: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:37:34.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:37:54.028: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:37:54.092: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:37:54.092: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:37:54.092: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:37:54.092: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:37:54.236: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:37:54.236: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:38:14.022: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:38:24.100: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:38:24.100: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:38:24.110: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:38:24.110: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:38:24.280: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:38:24.280: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:38:34.023: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:38:54.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:38:54.111: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:38:54.111: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:38:54.118: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:38:54.118: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:38:54.313: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:38:54.314: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:39:14.029: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:39:24.120: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:39:24.120: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:39:24.126: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:39:24.127: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:39:24.357: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:39:24.357: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:39:34.023: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:39:54.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:39:54.130: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:39:54.130: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:39:54.137: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:39:54.137: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:39:54.415: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:39:54.415: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:40:14.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:40:24.146: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:40:24.146: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:40:24.146: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:40:24.146: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:40:24.444: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:40:24.444: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:40:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:40:54.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:40:54.157: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:40:54.157: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:40:54.157: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:40:54.157: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:40:54.476: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:40:54.477: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:41:14.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:41:24.169: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:41:24.169: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:41:24.169: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:41:24.169: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:41:24.519: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:41:24.519: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:41:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:41:54.030: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:41:54.179: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:41:54.179: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:41:54.180: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:41:54.180: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:41:54.552: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:41:54.553: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:42:14.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:42:24.187: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:42:24.187: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:42:24.187: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:42:24.187: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:42:24.604: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:42:24.605: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:42:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:42:54.036: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:42:54.196: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:42:54.196: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:42:54.196: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:42:54.196: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:42:54.642: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:42:54.643: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:43:14.026: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:43:24.223: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:43:24.223: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:43:24.223: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:43:24.223: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:43:24.707: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:43:24.707: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:43:34.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:43:54.028: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:43:54.230: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:43:54.230: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:43:54.230: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:43:54.230: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:43:54.781: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:43:54.781: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:44:14.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:44:24.239: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:44:24.239: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:44:24.239: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:44:24.239: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:44:24.820: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:44:24.820: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:44:34.028: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:44:54.033: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:44:54.250: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:44:54.250: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:44:54.251: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:44:54.251: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:44:54.874: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:44:54.874: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:45:14.023: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:45:24.266: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:45:24.266: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:45:24.266: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:45:24.266: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:45:24.901: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:45:24.902: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:45:34.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:45:54.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:45:54.279: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:45:54.279: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:45:54.279: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:45:54.280: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:45:54.929: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:45:54.929: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:46:14.029: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:46:24.294: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:46:24.294: INFO: ConsumeCustomMetric URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric  false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10  }
Oct 13 09:46:24.305: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:46:24.305: INFO: ConsumeMem URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem  false durationSec=30&megabytes=0&requestSizeMegabytes=100  }
Oct 13 09:46:24.974: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:46:24.974: INFO: ConsumeCPU URL: {https   api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU  false durationSec=30&millicores=150&requestSizeMillicores=100  }
Oct 13 09:46:34.024: INFO: waiting for 2 replicas (current: 1)

Stderr
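
Editor's note: the Stdout above shows the e2e resource-consumer pattern used by the horizontal-pod-autoscaling test: the test repeatedly POSTs ConsumeCPU / ConsumeMem / BumpMetric requests to the rc-light-ctrl service through the API server's service proxy (the logged "ConsumeCPU URL" lines are Go url.URL dumps of those requests), then polls until the HPA scales the RC to 2 replicas. A minimal standalone sketch of one such request is below; the host, namespace, and service name are taken from the log, while the bearer-token handling and plain net/http client are assumptions for illustration only (the real test drives this through client-go).

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// consumeCPU approximates the request logged above as "ConsumeCPU URL":
// a POST to the rc-light-ctrl controller service, proxied through the API
// server, asking the consumer pods to burn the given number of millicores.
func consumeCPU(client *http.Client, token string, millicores int) error {
	u := url.URL{
		Scheme: "https",
		Host:   "api.ostest.shiftstack.com:6443",
		Path:   "/api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU",
		RawQuery: url.Values{
			"durationSec":           {"30"},
			"millicores":            {fmt.Sprint(millicores)},
			"requestSizeMillicores": {"100"},
		}.Encode(),
	}
	req, err := http.NewRequest(http.MethodPost, u.String(), nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token) // assumed bearer-token auth
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder client and token; a real run needs the cluster CA and credentials.
	if err := consumeCPU(&http.Client{}, "PLACEHOLDER-TOKEN", 150); err != nil {
		fmt.Println("request failed:", err)
	}
}
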
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:48.004: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:47.646: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:47.333: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:47.034: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:46.657: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
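
Editor's note: the "Driver X doesn't support Y -- skipping" entries that recur throughout this report come from a capability gate run before each storage test pattern. The sketch below illustrates the idea under stated assumptions: the DriverInfo and TestPattern types and field names are illustrative stand-ins, not the actual k8s.io/kubernetes storage framework API; only the skip wording is taken from the log.

package main

import "fmt"

// Illustrative stand-ins for a driver's declared capabilities and a test pattern.
type DriverInfo struct {
	Name              string
	SupportedVolTypes map[string]bool // e.g. "PreprovisionedPV", "DynamicPV", "InlineVolume"
	SupportedFsTypes  map[string]bool // e.g. "", "ext3", "ext4", "ntfs"
}

type TestPattern struct {
	VolType string
	FsType  string
}

// skipReason returns a non-empty message when the driver cannot run the pattern,
// mirroring the wording seen in the skipped cases above.
func skipReason(d DriverInfo, p TestPattern) string {
	if !d.SupportedVolTypes[p.VolType] {
		return fmt.Sprintf("Driver %s doesn't support %s -- skipping", d.Name, p.VolType)
	}
	if p.FsType != "" && !d.SupportedFsTypes[p.FsType] {
		return fmt.Sprintf("Driver %s doesn't support %s -- skipping", d.Name, p.FsType)
	}
	return ""
}

func main() {
	local := DriverInfo{
		Name:              "local",
		SupportedVolTypes: map[string]bool{"PreprovisionedPV": true},
		SupportedFsTypes:  map[string]bool{"": true, "ext4": true},
	}
	if msg := skipReason(local, TestPattern{VolType: "DynamicPV"}); msg != "" {
		fmt.Println(msg) // Driver local doesn't support DynamicPV -- skipping
	}
}
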
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:46.216: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Probing_container_should_not_be_ready_with_an_exec_readiness_probe_timeout__MinimumKubeletVersion_1.20___NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 107.0s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:23.282: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-api-machinery__client-go_should_negotiate_watch_and_report_errors_with_accept__application/json,application/vnd.kubernetes.protobuf___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:22.457: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:22.110: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-apps__Deployment_RecreateDeployment_should_delete_old_pods_and_create_new_ones__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 48.1s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:17.185: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:16.832: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__ConfigMap_should_be_consumable_from_pods_in_volume_as_non-root_with_FSGroup__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.5s

_sig-storage__EmptyDir_volumes_when_FSGroup_is_specified__LinuxOnly___NodeFeature_FSGroup__new_files_should_be_created_with_FSGroup_ownership_when_container_is_root__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 80.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:16.262: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:15.896: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:15.765: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-apps__ReplicaSet_should_list_and_delete_a_collection_of_ReplicaSets__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 42.0s

_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_VSAN_storage_capability_with_valid_objectSpaceReservation_and_iopsLimit_values_is_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:31:09.833: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:31:10.027229 1016999 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:31:10.027: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:31:10.031: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-4984" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
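
Editor's note: the Stdout above (and several later cases) logs "No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled" alongside the policy/v1beta1 deprecation warning. A minimal sketch of such a check is below, assuming a client-go release contemporaneous with the k8s v1.22.1 framework in this report (one that still serves the deprecated policy/v1beta1 group); the function name is illustrative, not the suite's own helper.

package pspcheck

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podSecurityPolicyDisabled lists PSPs via the deprecated policy/v1beta1 API
// and treats an empty result as "PodSecurityPolicy is disabled", matching the
// assumption logged by the test setup above.
func podSecurityPolicyDisabled(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	psps, err := cs.PolicyV1beta1().PodSecurityPolicies().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	return len(psps.Items) == 0, nil
}
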
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:09.205: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:08.807: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.8s

_sig-storage__Subpath_Atomic_writer_volumes_should_support_subpaths_with_configmap_pod__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 69.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:51.044: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:50.709: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:241]: Driver "cinder" does not support cloning - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:30:50.164: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:30:50.362282 1016183 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:30:50.362: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with pvc data source [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:239
Oct 13 09:30:50.370: INFO: Driver "cinder" does not support cloning - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-539" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:241]: Driver "cinder" does not support cloning - skipping

Stderr
_sig-api-machinery__ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_configMap.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 28.9s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:46.489: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:46.135: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:38.556: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:38.242: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:199]: Not enough topologies in cluster -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename topology
Oct 13 09:30:37.509: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:30:37.794371 1015947 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:30:37.794: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:192
Oct 13 09:30:37.804: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:nova]
Oct 13 09:30:37.805: INFO: In-tree plugin kubernetes.io/cinder is not migrated, not validating any metrics
Oct 13 09:30:37.805: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-topology-9468" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:199]: Not enough topologies in cluster -- skipping

Stderr
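
Editor's note: the cinder topology case above skips because the suite found only one zone (the log shows "found topology map[failure-domain.beta.kubernetes.io/zone:nova]" followed by "Not enough topologies in cluster"). The sketch below shows one way such a check can be expressed with client-go, assuming node zone labels as the topology source; it is an illustration of the skip condition, not the framework's own topology helper.

package topologycheck

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// zoneLabel is the label key reported in the log above.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// distinctZones lists the cluster's nodes and collects the distinct zone label
// values; with a single zone ("nova" here), an AllowedTopologies conflict
// cannot be constructed, so the test skips.
func distinctZones(ctx context.Context, cs kubernetes.Interface) (map[string]struct{}, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	zones := map[string]struct{}{}
	for _, n := range nodes.Items {
		if z, ok := n.Labels[zoneLabel]; ok {
			zones[z] = struct{}{}
		}
	}
	return zones, nil
}

// enoughTopologies reports whether at least minZones distinct zones were found.
func enoughTopologies(zones map[string]struct{}, minZones int) bool {
	return len(zones) >= minZones
}
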
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:36.942: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:36.500: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:36.101: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 147.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:26.267: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:25.849: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:25.453: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:25.112: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:24.741: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:24.411: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:24.070: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:23.695: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:23.372: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:22.983: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV,_based_on_the_allowed_zones,_datastore_and_storage_policy_specified_in_storage_class__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:30:22.416: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:30:22.625653 1015370 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:30:22.625: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:30:22.631: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-9431" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
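
Provider-gated specs such as the vsphere zone-support case above skip from a BeforeEach that runs after the framework's namespace setup, which is why the log shows a namespace being created and then destroyed for a skipped spec. A hedged sketch of that pattern, assuming the skipper helpers from the v1.22.1 e2e framework; the Describe block is illustrative:

package example

import (
	"github.com/onsi/ginkgo"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// Illustrative spec: the provider check sits in BeforeEach, so it only runs
// once the framework has already built the test namespace.
var _ = ginkgo.Describe("[sig-storage] Zone Support [Feature:vsphere]", func() {
	ginkgo.BeforeEach(func() {
		e2eskipper.SkipUnlessProviderIs("vsphere")
	})
	// ... the zone-support specs themselves would follow here ...
})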
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:21.847: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-autoscaling___HPA__Horizontal_pod_autoscaling__scale_resource__Custom_Metrics_from_Stackdriver__should_scale_down_with_Custom_Metric_of_type_Object_from_Stackdriver__Feature_CustomMetricsAutoscaling___Skipped_gce___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 09:30:21.529: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-link-bindmounted__Two_pods_mounting_a_local_volume_one_after_the_other_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 50.4s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:18.074: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Downward_API_should_provide_container's_limits.cpu/memory_and_requests.cpu/memory_as_env_vars__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 39.5s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 53.1s

_sig-storage__Ephemeralstorage_When_pod_refers_to_non-existent_ephemeral_storage_should_allow_deletion_of_pod_with_invalid_volume___configmap__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 127.0s

_sig-apps__Deployment_deployment_should_support_proportional_scaling__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 91.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:50.741: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:50.347: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Simple_pod_should_support_exec_through_kubectl_proxy__Skipped_Proxy___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.4s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:46.143: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:45.632: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__Mounted_volume_expand_Should_verify_mounted_devices_can_be_resized__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/mounted_volume_resize.go:62]: Only supported for providers [aws gce] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Mounted volume expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename mounted-volume-expand
Oct 13 09:29:44.999: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:29:45.195860 1013393 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:29:45.195: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Mounted volume expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/mounted_volume_resize.go:61
Oct 13 09:29:45.200: INFO: Only supported for providers [aws gce] (not openstack)
[AfterEach] [sig-storage] Mounted volume expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-mounted-volume-expand-5105" for this suite.
[AfterEach] [sig-storage] Mounted volume expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/mounted_volume_resize.go:108
Oct 13 09:29:45.227: INFO: AfterEach: Cleaning up resources for mounted volume resize
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/mounted_volume_resize.go:62]: Only supported for providers [aws gce] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:44.336: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:43.945: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Simple_pod_should_return_command_exit_codes_running_a_successful_command__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 82.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:33.606: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__HostPath_should_support_subPath__NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.1s

_sig-network__Proxy_version_v1_should_proxy_logs_on_node_with_explicit_kubelet_port_using_proxy_subresource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:32.052: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-cli__Kubectl_client_Proxy_server_should_support_proxy_with_--port_0___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-node__Security_Context_When_creating_a_container_with_runAsUser_should_run_the_container_with_uid_65534__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 51.3s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:26.410: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__Projected_secret_should_be_consumable_from_pods_in_volume_with_defaultMode_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.1s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:21.388: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:21.075: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:20.657: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes_vsphere__Feature_vsphere__should_test_that_deleting_a_PVC_before_the_pod_does_not_cause_pod_deletion_to_fail_on_vsphere_volume_detach__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:64]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pv
Oct 13 09:29:19.966: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:29:20.238758 1012349 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:29:20.238: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
Oct 13 09:29:20.249: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pv-1138" for this suite.
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:112
Oct 13 09:29:20.269: INFO: AfterEach: Cleaning up test resources
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:64]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:241]: driver nfs does not support volume limits
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumelimits
Oct 13 09:29:19.129: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:29:19.385256 1012334 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:29:19.385: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that all csinodes have volume limits [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:238
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumelimits-5292" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:241]: driver nfs does not support volume limits

Stderr
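
The nfs volumeLimits case differs from the gates above in that the skip is raised inside the test body (volumelimits.go:241), after the [It] has already started. A rough illustration under the same assumptions; supportsVolumeLimits is a hypothetical stand-in for the suite's actual runtime check:

package example

import (
	"github.com/onsi/ginkgo"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// supportsVolumeLimits stands in for the probe that decides whether the
// driver under test publishes per-node volume limits.
func supportsVolumeLimits(driverName string) bool {
	return driverName != "nfs"
}

var _ = ginkgo.It("should verify that all csinodes have volume limits", func() {
	driver := "nfs" // illustrative; the suite injects the driver under test
	if !supportsVolumeLimits(driver) {
		e2eskipper.Skipf("driver %s does not support volume limits", driver)
	}
	// ... CSINode limit verification would go here ...
})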
_sig-storage__CSI_mock_volume_CSI_Volume_expansion_should_expand_volume_without_restarting_pod_if_nodeExpansion=off__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 184.0s

_sig-storage__Volume_FStype__Feature_vsphere__verify_fstype_-_ext3_formatted_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:76]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-fstype
Oct 13 09:29:16.050: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:29:16.255666 1012075 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:29:16.255: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:75
Oct 13 09:29:16.259: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-fstype-1909" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:76]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-cli__Kubectl_client_Simple_pod_should_return_command_exit_codes_running_a_failing_command__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 66.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:15.309: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:15.305: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.951: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.922: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.583: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.533: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.142: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.118: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:13.725: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:13.743: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:13.370: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__patching/updating_a_validating_webhook_should_work__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.0s

_sig-cli__Kubectl_client_Kubectl_client-side_validation_should_create/apply_a_CR_with_unknown_fields_for_CRD_with_no_validation_schema__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 16.5s

_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__patching/updating_a_mutating_webhook_should_work__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.3s

_sig-node__Security_Context_When_creating_a_container_with_runAsUser_should_run_the_container_with_uid_0__LinuxOnly___NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 31.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:42.366: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__Job_should_run_a_job_to_completion_when_tasks_sometimes_fail_and_are_locally_restarted__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 56.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:33.748: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:33.326: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-node__PreStop_graceful_pod_terminated_should_wait_until_preStop_hook_completes_the_process__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.1s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:29.403: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:29.050: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:28.657: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:28.254: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:27.829: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSI_attach_test_using_mock_driver_should_require_VolumeAttach_for_drivers_with_attachment__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 176.0s

_sig-apps__ReplicaSet_should_surface_a_failure_condition_on_a_common_issue_like_exceeded_quota__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 3.8s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:16.216: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:15.792: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes_NFS_with_Single_PV_-_PVC_pairs_create_a_PVC_and_non-pre-bound_PV__test_write_access__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 82.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:11.487: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:11.091: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:10.720: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:10.439: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:10.136: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__ResourceQuota__Feature_PodPriority__should_verify_ResourceQuota's_priority_class_scope__cpu,_memory_quota_set__against_a_pod_with_same_priority_class.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 6.9s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:08.586: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-network__DNS_should_provide_DNS_for_the_cluster__Provider_GCE___Skipped_Proxy___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/network/dns.go:69]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] DNS
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename dns
Oct 13 09:28:08.064: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:28:08.225021 1009576 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:28:08.225: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE] [Skipped:Proxy] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/network/dns.go:68
Oct 13 09:28:08.229: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-network] DNS
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-dns-7419" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/network/dns.go:69]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:07.561: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:07.191: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-network__DNS_should_provide_DNS_for_the_cluster___Conformance___Skipped_Proxy___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 42.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:59.830: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:59.324: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:58.931: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 74.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:56.175: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:55.783: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:55.340: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:54.940: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:54.614: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:54.231: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:53.903: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:53.590: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:53.214: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:52.888: INFO: Driver emptydir doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext4 -- skipping

Stderr
_sig-network__EndpointSliceMirroring_should_mirror_a_custom_Endpoints_resource_through_create_update_and_delete__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 7.0s

_sig-storage__CSI_mock_volume_CSI_Snapshot_Controller_metrics__Feature_VolumeSnapshotDataSource__snapshot_controller_should_emit_pre-provisioned_CreateSnapshot,_CreateSnapshotAndReady,_and_DeleteSnapshot_metrics__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 128.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1786]: Snapshot controller metrics not found -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] CSI mock volume
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename csi-mock-volumes
Oct 13 09:27:42.390: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:27:42.618651 1008424 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:27:42.618: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] snapshot controller should emit pre-provisioned CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1765
STEP: Building a driver namespace object, basename e2e-csi-mock-volumes-4421
Oct 13 09:27:42.806: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Oct 13 09:27:43.035: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-attacher
Oct 13 09:27:43.048: INFO: creating *v1.ClusterRole: external-attacher-runner-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.048: INFO: Define cluster role external-attacher-runner-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.060: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.068: INFO: creating *v1.Role: e2e-csi-mock-volumes-4421-6950/external-attacher-cfg-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.074: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-attacher-role-cfg
Oct 13 09:27:43.095: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-provisioner
Oct 13 09:27:43.108: INFO: creating *v1.ClusterRole: external-provisioner-runner-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.108: INFO: Define cluster role external-provisioner-runner-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.122: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.141: INFO: creating *v1.Role: e2e-csi-mock-volumes-4421-6950/external-provisioner-cfg-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.154: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-provisioner-role-cfg
Oct 13 09:27:43.176: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-resizer
Oct 13 09:27:43.189: INFO: creating *v1.ClusterRole: external-resizer-runner-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.189: INFO: Define cluster role external-resizer-runner-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.208: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.228: INFO: creating *v1.Role: e2e-csi-mock-volumes-4421-6950/external-resizer-cfg-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.248: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-resizer-role-cfg
Oct 13 09:27:43.275: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-snapshotter
Oct 13 09:27:43.303: INFO: creating *v1.ClusterRole: external-snapshotter-runner-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.303: INFO: Define cluster role external-snapshotter-runner-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.314: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.331: INFO: creating *v1.Role: e2e-csi-mock-volumes-4421-6950/external-snapshotter-leaderelection-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.342: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/external-snapshotter-leaderelection
Oct 13 09:27:43.358: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-mock
Oct 13 09:27:43.377: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.399: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.414: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.426: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.450: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.463: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.480: INFO: creating *v1.StorageClass: csi-mock-sc-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.501: INFO: creating *v1.StatefulSet: e2e-csi-mock-volumes-4421-6950/csi-mockplugin
Oct 13 09:27:43.520: INFO: creating *v1.CSIDriver: csi-mock-e2e-csi-mock-volumes-4421
Oct 13 09:27:43.530: INFO: creating *v1.StatefulSet: e2e-csi-mock-volumes-4421-6950/csi-mockplugin-snapshotter
Oct 13 09:27:43.548: INFO: waiting up to 4m0s for CSIDriver "csi-mock-e2e-csi-mock-volumes-4421"
Oct 13 09:27:43.561: INFO: waiting for CSIDriver csi-mock-e2e-csi-mock-volumes-4421 to register on node ostest-n5rnf-worker-0-j4pkp
W1013 09:28:25.191663 1008424 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
W1013 09:28:25.191699 1008424 metrics_grabber.go:151] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
Oct 13 09:28:25.191: INFO: Snapshot controller metrics not found -- skipping
STEP: Cleaning up resources
STEP: deleting the test namespace: e2e-csi-mock-volumes-4421
STEP: Waiting for namespaces [e2e-csi-mock-volumes-4421] to vanish
STEP: uninstalling csi mock driver
Oct 13 09:28:57.234: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-attacher
Oct 13 09:28:57.256: INFO: deleting *v1.ClusterRole: external-attacher-runner-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.284: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.324: INFO: deleting *v1.Role: e2e-csi-mock-volumes-4421-6950/external-attacher-cfg-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.353: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-attacher-role-cfg
Oct 13 09:28:57.378: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-provisioner
Oct 13 09:28:57.403: INFO: deleting *v1.ClusterRole: external-provisioner-runner-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.460: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.477: INFO: deleting *v1.Role: e2e-csi-mock-volumes-4421-6950/external-provisioner-cfg-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.501: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-provisioner-role-cfg
Oct 13 09:28:57.522: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-resizer
Oct 13 09:28:57.538: INFO: deleting *v1.ClusterRole: external-resizer-runner-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.552: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.574: INFO: deleting *v1.Role: e2e-csi-mock-volumes-4421-6950/external-resizer-cfg-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.587: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-resizer-role-cfg
Oct 13 09:28:57.601: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-snapshotter
Oct 13 09:28:57.611: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.620: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.646: INFO: deleting *v1.Role: e2e-csi-mock-volumes-4421-6950/external-snapshotter-leaderelection-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.666: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/external-snapshotter-leaderelection
Oct 13 09:28:57.679: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-mock
Oct 13 09:28:57.691: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.711: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.728: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.744: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.762: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.790: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.802: INFO: deleting *v1.StorageClass: csi-mock-sc-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.826: INFO: deleting *v1.StatefulSet: e2e-csi-mock-volumes-4421-6950/csi-mockplugin
Oct 13 09:28:57.838: INFO: deleting *v1.CSIDriver: csi-mock-e2e-csi-mock-volumes-4421
Oct 13 09:28:57.852: INFO: deleting *v1.StatefulSet: e2e-csi-mock-volumes-4421-6950/csi-mockplugin-snapshotter
STEP: deleting the driver namespace: e2e-csi-mock-volumes-4421-6950
STEP: Waiting for namespaces [e2e-csi-mock-volumes-4421-6950] to vanish
[AfterEach] [sig-storage] CSI mock volume
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1786]: Snapshot controller metrics not found -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:41.852: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:41.454: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 69.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:35.190: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__tmpfs__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_write_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 53.9s

_sig-cli__Kubectl_client_Kubectl_label_should_update_the_label_on_a_resource___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.6s

_sig-cli__Kubectl_client_Kubectl_create_quota_should_reject_quota_with_invalid_scopes__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:32.259: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:31.888: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:31.574: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:31.190: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-node__Container_Runtime_blackbox_test_on_terminated_container_should_report_termination_message__LinuxOnly__from_log_output_if_TerminationMessagePolicy_FallbackToLogsOnError_is_set__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 28.1s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:17.506: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:17.077: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Volume_FStype__Feature_vsphere__verify_invalid_fstype__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:76]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-fstype
Oct 13 09:27:16.463: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:27:16.645999 1007185 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:27:16.646: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:75
Oct 13 09:27:16.650: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-fstype-4616" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:76]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:15.794: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:15.417: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:15.085: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:14.670: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-cli__Kubectl_client_Simple_pod_should_support_port-forward__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.4s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:07.684: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:07.340: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:27:06.987: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Pods_should_support_retrieving_logs_from_the_container_over_websockets__NodeConformance___Conformance___Skipped_Proxy___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 28.9s

_sig-storage__Projected_configMap_should_be_consumable_from_pods_in_volume_as_non-root_with_FSGroup__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 29.2s

_sig-api-machinery__Servers_with_support_for_API_chunking_should_return_chunks_of_results_for_list_calls__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 21.6s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:52.658: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:52.286: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:51.892: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:51.498: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Volume_Placement__Feature_vsphere__should_create_and_delete_pod_with_the_same_volume_source_attach/detach_to_different_worker_nodes__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-placement
Oct 13 09:26:50.920: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:26:51.111419 1005939 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:26:51.111: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55
Oct 13 09:26:51.114: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-placement-465" for this suite.
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:50.411: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:50.047: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:49.695: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:49.358: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSIServiceAccountToken_token_should_not_be_plumbed_down_when_csiServiceAccountTokenEnabled=false__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 143.0s

_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_with_incompatible_zone_along_with_compatible_storagePolicy_and_datastore_combination_specified_in_storage_class_fails__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:26:46.357: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:26:46.561481 1005769 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:26:46.561: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:26:46.565: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-7844" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:45.760: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:45.394: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:45.079: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:44.757: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__Servers_with_support_for_Table_transformation_should_return_chunks_of_table_results_for_list_calls__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.1s

_sig-storage__PersistentVolumes-local___Volume_type__block__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_read_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 44.1s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:26:21.918: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__Downward_API_volume_should_provide_podname_only__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.1s

_sig-storage__Downward_API_volume_should_provide_container's_memory_request__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 32.9s

_sig-apps__DisruptionController_should_update/patch_PodDisruptionBudget_status__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 28.9s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:50.997: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:50.560: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 140.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:49.623: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:49.310: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:48.998: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:48.667: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:48.275: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:47.939: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Simple_pod_should_contain_last_line_of_the_log__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 63.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:40.426: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:40.044: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:39.633: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Generic_Ephemeral-volume__default_fs___late-binding___ephemeral_should_create_read/write_inline_ephemeral_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 151.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:27.446: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_mock_volume_CSI_online_volume_expansion_should_expand_volume_without_restarting_pod_if_attach=on,_nodeExpansion=on__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 131.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:22.157: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-node__Probing_container_should__not__be_restarted_with_a_non-local_redirect_http_liveness_probe__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 284.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 70.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:12.260: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:11.882: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:11.879: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:11.506: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:11.125: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 38.2s

_sig-network__Services_should_provide_secure_master_service___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:25:00.010: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__CSI_mock_volume_CSI_workload_information_using_mock_driver_should_not_be_passed_when_podInfoOnMount=false__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 122.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.2s

_sig-network__IngressClass_API__should_support_creating_IngressClass_API_operations__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.1s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:58.440: INFO: Driver cinder doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:58.164: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-api-machinery__Garbage_collector_should_orphan_RS_created_by_deployment_when_deleteOptions.PropagationPolicy_is_Orphan__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.9s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:57.803: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:57.639: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-api-machinery__client-go_should_negotiate_watch_and_report_errors_with_accept__application/json___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:57.303: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Projected_secret_optional_updates_should_be_reflected_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 28.9s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:41.766: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:41.398: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:41.017: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:40.634: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:203]: Driver "nfs" does not support populate data from snapshot - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:24:40.005: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:24:40.179124 1001326 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:24:40.179: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:201
Oct 13 09:24:40.186: INFO: Driver "nfs" does not support populate data from snapshot - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-3149" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:203]: Driver "nfs" does not support populate data from snapshot - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:39.331: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:38.920: INFO: Driver cinder doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping

Stderr
_sig-storage__Downward_API_volume_should_provide_container's_cpu_request__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.1s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 56.3s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:25.491: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__ConfigMap_should_be_consumable_via_the_environment__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:25.000: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:24.870: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:24.612: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:24.433: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:24.100: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:23.753: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__ResourceQuota__Feature_PodPriority__should_verify_ResourceQuota's_priority_class_scope__quota_set_to_pod_count__1__against_2_pods_with_same_priority_class.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 6.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:16.457: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:16.059: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.7s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:09.301: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:08.922: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:08.589: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:08.207: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:07.733: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:24:07.338: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Probing_container_should__not__be_restarted_with_a_/healthz_http_liveness_probe__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 267.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 34.3s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:49.862: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV_with_storage_policy_specified_in_storage_class_in_waitForFirstConsumer_binding_mode_with_allowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:23:49.320: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:23:49.477592  999141 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:23:49.477: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:23:49.481: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-2113" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:48.741: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:48.350: INFO: Driver emptydir doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:48.003: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Pods_should_run_through_the_lifecycle_of_Pods_and_PodStatus__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 51.6s

_sig-storage__Volume_Placement__Feature_vsphere__should_create_and_delete_pod_with_multiple_volumes_from_same_datastore__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-placement
Oct 13 09:23:46.454: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:23:46.774389  999040 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:23:46.774: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55
Oct 13 09:23:46.783: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-placement-2644" for this suite.
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:45.761: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:45.337: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:44.989: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:44.650: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__DisruptionController_should_create_a_PodDisruptionBudget__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:43.372: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__CSI_mock_volume_storage_capacity_exhausted,_immediate_binding__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 129.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:41.142: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV,_based_on_the_allowed_zones_and_datastore_specified_in_storage_class_when_there_are_multiple_datastores_with_the_same_name_under_different_zones_across_datacenters__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:23:40.610: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:23:40.792123  998689 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:23:40.792: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:23:40.796: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-5422" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_VSAN_storage_capability_with_valid_hostFailuresToTolerate_and_cacheReservation_values_is_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:23:39.836: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:23:40.002602  998674 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:23:40.002: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:23:40.006: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-3682" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:39.329: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:38.962: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 41.9s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:33.734: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:33.391: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:33.039: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:32.553: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Pods_should_delete_a_collection_of_pods__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 61.0s

Failed:
fail [k8s.io/kubernetes@v1.22.1/test/e2e/common/node/pods.go:884]: found a pod(s)
Unexpected error:
    <*errors.errorString | 0xc0002fcad0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-node] Pods
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pods
Oct 13 09:23:27.398: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:23:27.580386  998258 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:23:27.580: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  k8s.io/kubernetes@v1.22.1/test/e2e/common/node/pods.go:188
[It] should delete a collection of pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:630
STEP: Create set of pods
Oct 13 09:23:27.610: INFO: created test-pod-1
Oct 13 09:23:27.641: INFO: created test-pod-2
Oct 13 09:23:27.674: INFO: created test-pod-3
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
Oct 13 09:23:27.796: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:28.807: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:29.803: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:30.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:31.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:32.812: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:33.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:34.811: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:35.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:36.803: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:37.807: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:38.806: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:39.805: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:40.804: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:41.807: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:42.809: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:43.816: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:44.811: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:45.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:46.807: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:47.804: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:48.807: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:49.804: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:50.809: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:51.801: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:52.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:53.809: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:54.803: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:55.805: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:56.806: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:57.804: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:58.803: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:23:59.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:00.813: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:01.804: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:02.814: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:03.813: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:04.807: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:05.806: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:06.827: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:07.807: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:08.816: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:09.813: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:10.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:11.804: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:12.805: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:13.803: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:14.806: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:15.805: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:16.803: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:17.803: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:18.801: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:19.804: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:20.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:21.807: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:22.804: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:23.809: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:24.802: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:25.821: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:26.804: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:27.801: INFO: Pod quantity 3 is different from expected quantity 0
Oct 13 09:24:27.809: INFO: Pod quantity 3 is different from expected quantity 0
[AfterEach] [sig-node] Pods
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "e2e-pods-666".
STEP: Found 3 events.
Oct 13 09:24:27.813: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-pod-1: { } Scheduled: Successfully assigned e2e-pods-666/test-pod-1 to ostest-n5rnf-worker-0-94fxs
Oct 13 09:24:27.813: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-pod-2: { } Scheduled: Successfully assigned e2e-pods-666/test-pod-2 to ostest-n5rnf-worker-0-94fxs
Oct 13 09:24:27.813: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-pod-3: { } Scheduled: Successfully assigned e2e-pods-666/test-pod-3 to ostest-n5rnf-worker-0-j4pkp
Oct 13 09:24:27.817: INFO: POD         NODE                         PHASE    GRACE  CONDITIONS
Oct 13 09:24:27.817: INFO: test-pod-1  ostest-n5rnf-worker-0-94fxs  Pending  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC  }]
Oct 13 09:24:27.817: INFO: test-pod-2  ostest-n5rnf-worker-0-94fxs  Pending  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC  }]
Oct 13 09:24:27.817: INFO: test-pod-3  ostest-n5rnf-worker-0-j4pkp  Pending  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC  }]
Oct 13 09:24:27.817: INFO: 
Oct 13 09:24:27.823: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-pods-666" for this suite.
fail [k8s.io/kubernetes@v1.22.1/test/e2e/common/node/pods.go:884]: found a pod(s)
Unexpected error:
    <*errors.errorString | 0xc0002fcad0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Stderr
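This entry is one of the run's failures: the conformance test creates three pods, deletes them as a collection, and then polls for roughly a minute waiting for the pod list to drop to zero. All three pods were still Pending with unready containers when the wait expired, so the final assertion failed with "found a pod(s)" and the "timed out waiting for the condition" error recorded above. For reference, below is a minimal client-go sketch of the same create, DeleteCollection, and wait sequence; the kubeconfig path, namespace, label, and image are assumptions for illustration, not values taken from the test itself.

```go
// Minimal client-go sketch of the create -> DeleteCollection -> wait flow that
// the failing conformance test exercises. Paths, namespace, labels, and image
// below are illustrative assumptions, not values from the test.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns, label := "demo-pods", "type=collection-test" // illustrative namespace and label

	// Create a small set of pods carrying a common label.
	for i := 1; i <= 3; i++ {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   fmt.Sprintf("test-pod-%d", i),
				Labels: map[string]string{"type": "collection-test"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "sleep", Image: "busybox", Command: []string{"sleep", "3600"}}},
			},
		}
		if _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// Delete the whole labelled collection in a single call.
	if err := client.CoreV1().Pods(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: label}); err != nil {
		panic(err)
	}

	// Poll until the collection is gone, or give up after one minute
	// (the same kind of timeout that produced the failure above).
	deadline := time.Now().Add(time.Minute)
	for {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: label})
		if err != nil {
			panic(err)
		}
		if len(pods.Items) == 0 {
			fmt.Println("all pods deleted")
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("timed out waiting for the condition: %d pod(s) still present", len(pods.Items)))
		}
		fmt.Printf("Pod quantity %d is different from expected quantity 0\n", len(pods.Items))
		time.Sleep(time.Second)
	}
}
```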
_sig-storage__vcp_at_scale__Feature_vsphere___vsphere_scale_tests__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_scale.go:76]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] vcp at scale [Feature:vsphere] 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename vcp-at-scale
Oct 13 09:23:26.603: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:23:26.839870  998244 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:23:26.839: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp at scale [Feature:vsphere] 
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_scale.go:75
Oct 13 09:23:26.845: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] vcp at scale [Feature:vsphere] 
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-vcp-at-scale-635" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_scale.go:76]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:23:25.923: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Security_Context_when_creating_containers_with_AllowPrivilegeEscalation_should_allow_privilege_escalation_when_true__LinuxOnly___NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.1s

_sig-storage__Projected_configMap_should_be_consumable_from_pods_in_volume_with_defaultMode_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 43.2s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:55.457: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:55.027: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 48.7s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:54.191: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:53.800: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-bindmounted__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_write_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 25.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:48.549: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:48.151: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:47.733: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:47.311: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:46.846: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Container_Lifecycle_Hook_when_create_a_pod_with_lifecycle_hook_should_execute_prestop_exec_hook_properly__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 46.9s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:45.296: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSI_Volume_Snapshots__Feature_VolumeSnapshotDataSource__volumesnapshotcontent_and_pvc_in_Bound_state_with_deletion_timestamp_set_should_not_get_deleted_while_snapshot_finalizer_exists__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 170.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:37.595: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:37.274: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:36.939: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 94.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:33.269: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:32.917: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:32.542: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Projected_downwardAPI_should_provide_podname_as_non-root_with_fsgroup_and_defaultMode__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.2s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:21.489: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:21.183: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__EmptyDir_volumes_when_FSGroup_is_specified__LinuxOnly___NodeFeature_FSGroup__nonexistent_volume_subPath_should_have_the_correct_mode_and_owner_using_FSGroup__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:20.844: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:20.480: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:20.484: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.5s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:11.437: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:11.122: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:10.758: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:10.445: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:10.098: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV,_based_on_a_VSAN_capability,_datastore_and_compatible_zone_specified_in_storage_class__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:22:09.591: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:22:09.782569  994874 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:22:09.782: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:22:09.786: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-8117" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-node__Security_Context_should_support_pod.Spec.SecurityContext.SupplementalGroups__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 39.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:07.340: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:07.014: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:06.710: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:06.357: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:06.056: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:05.755: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:05.375: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:05.045: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:22:04.680: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__PersistentVolumes_NFS_with_Single_PV_-_PVC_pairs_should_create_a_non-pre-bound_PV_and_PVC__test_write_access___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.3s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:47.307: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__ConfigMap_should_be_consumable_from_pods_in_volume_as_non-root__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.2s

_sig-cli__Kubectl_client_Update_Demo_should_create_and_stop_a_replication_controller___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 27.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:41.746: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:41.378: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:41.024: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-apps__CronJob_should_support_CronJob_API_operations__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-api-machinery__Garbage_collector_should_support_orphan_deletion_of_custom_resources__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:36.242: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:35.889: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__CustomResourceDefinition_Watch__Privileged_ClusterAdmin__CustomResourceDefinition_Watch_watch_on_custom_resource_definition_objects__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 64.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:28.110: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:27.741: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:27.406: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__CSI_mock_volume_CSI_FSGroupPolicy__LinuxOnly__should_modify_fsGroup_if_fsGroupPolicy=File__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 287.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:13.263: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:12.933: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:12.611: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:21:12.199: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-node__Pods_should_get_a_host_IP__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 42.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:56.789: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:56.440: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:56.083: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:55.605: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__ReplicationController_should_serve_a_basic_image_on_each_replica_with_a_private_image__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/apps/rc.go:70]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-apps] ReplicationController
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename replication-controller
Oct 13 09:20:54.874: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:20:55.110538  991531 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:20:55.110: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a private image [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/rc.go:68
Oct 13 09:20:55.127: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-apps] ReplicationController
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-replication-controller-107" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/apps/rc.go:70]: Only supported for providers [gce gke] (not openstack)

Stderr
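
The entry above also shows the second recurring skip pattern: "Only supported for providers [gce gke] (not openstack)". The provider check runs inside the test body, after the namespace has already been created, which is why the log still records namespace setup and teardown around the skip. The sketch below is a hypothetical, standalone illustration of such a provider gate; the names (currentProvider, skipUnlessProviderIs) are illustrative and not the framework's API.

```go
// Hypothetical sketch of a provider gate producing skip reasons like
// "Only supported for providers [gce gke] (not openstack)".
// Illustrative only; not the k8s e2e framework code.
package main

import "fmt"

// currentProvider stands in for whatever cloud the suite detected.
var currentProvider = "openstack"

// skipUnlessProviderIs returns a skip reason unless the current provider
// is one of the supported ones.
func skipUnlessProviderIs(supported ...string) string {
	for _, p := range supported {
		if p == currentProvider {
			return ""
		}
	}
	return fmt.Sprintf("Only supported for providers %v (not %s)", supported, currentProvider)
}

func main() {
	// Mirrors the ReplicationController entry above, which requires a
	// GCE/GKE-backed private image registry.
	if reason := skipUnlessProviderIs("gce", "gke"); reason != "" {
		fmt.Println("skip:", reason)
	}
}
```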
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:54.372: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-link-bindmounted__Two_pods_mounting_a_local_volume_at_the_same_time_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 40.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:46.807: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_mock_volume_storage_capacity_exhausted,_late_binding,_no_topology__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 267.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:44.541: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Probing_container_should_be_restarted_with_a_/healthz_http_liveness_probe__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 63.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:43.917: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:43.597: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 97.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:27.757: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:27.342: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:20:26.953: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_provide_basic_identity__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 181.0s

_sig-cli__Kubectl_client_Kubectl_expose_should_create_services_for_rc___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 30.9s

_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_VSAN_storage_capability_with_invalid_diskStripes_value_is_not_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:20:22.984: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:20:23.142097  989916 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:20:23.142: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:20:23.150: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-5681" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 54.2s

_sig-apps__CronJob_should_remove_from_active_list_jobs_that_have_been_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 262.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:44.099: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:43.734: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:43.376: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 09:19:42.827: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:19:43.005552  988414 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:19:43.005: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 09:19:43.010: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-650" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:42.262: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:41.916: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 66.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:40.954: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:40.609: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:40.246: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__HostPath_should_give_a_volume_the_correct_mode__LinuxOnly___NodeConformance___Skipped_ibmcloud___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.4s

_sig-node__Container_Runtime_blackbox_test_when_starting_a_container_that_exits_should_run_with_the_expected_status__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 95.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:36.875: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_be_able_to_deny_attaching_pod__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 68.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 133.0s

_sig-api-machinery__Watchers_should_observe_an_object_deletion_if_it_stops_meeting_the_requirements_of_the_selector__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 11.1s

_sig-api-machinery__Watchers_should_receive_events_on_concurrent_watches_in_same_order__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 6.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:19:27.954: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__EmptyDir_volumes_when_FSGroup_is_specified__LinuxOnly___NodeFeature_FSGroup__files_with_FSGroup_ownership_should_support__root,0644,tmpfs___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:52.899: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:52.471: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 40.2s

_sig-network__DNS_should_provide_/etc/hosts_entries_for_the_cluster__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 64.0s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:46.020: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:45.600: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 103.0s

_sig-storage__PersistentVolumes-local___Volume_type__dir-link-bindmounted__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_write_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 58.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:37.511: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:37.147: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:36.792: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:36.428: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:36.042: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:35.714: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:35.272: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Volume_Provisioning_On_Clustered_Datastore__Feature_vsphere__verify_dynamic_provision_with_default_parameter_on_clustered_datastore__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:53]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-provision
Oct 13 09:18:34.585: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:18:34.793410  985624 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:18:34.793: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:52
Oct 13 09:18:34.799: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-provision-3709" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:53]: Only supported for providers [vsphere] (not openstack)

Stderr
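
Note: the provider-gated skips ("Only supported for providers [vsphere] / [gce gke] / [azure] (not openstack)") follow a similar pattern: this run used the openstack provider, so tests tied to a specific cloud bail out early. In entries like the one above the skip happens only after the framework has created (and then torn down) the test namespace, which is why they take slightly longer than the sub-second capability skips. A minimal sketch of such a provider gate, with hypothetical names (the real suite uses e2eskipper.SkipUnlessProviderIs and framework.TestContext.Provider):

package sketch

import "fmt"

// skipUnlessProviderIs returns a skip error unless the current provider is one of
// the supported ones; the message mirrors the ones recorded in this report.
func skipUnlessProviderIs(current string, supported ...string) error {
	for _, p := range supported {
		if p == current {
			return nil
		}
	}
	return fmt.Errorf("Only supported for providers %v (not %s)", supported, current)
}
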
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:33.952: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:33.570: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-apps__ReplicationController_should_test_the_lifecycle_of_a_ReplicationController__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 58.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:29.375: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:29.015: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__server_version_should_find_the_server_version__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.7s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:27.983: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:27.548: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 71.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:25.851: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Volume_Disk_Size__Feature_vsphere__verify_dynamically_provisioned_pv_has_size_rounded_up_correctly__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_disksize.go:56]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Disk Size [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-disksize
Oct 13 09:18:25.253: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:18:25.385001  985071 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:18:25.385: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Disk Size [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_disksize.go:55
Oct 13 09:18:25.394: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Disk Size [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-disksize-2206" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_disksize.go:56]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:24.728: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:24.375: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:24.027: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:23.621: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__EmptyDir_volumes_should_support__non-root,0666,default___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 30.9s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:203]: Driver "cinder" does not support populate data from snapshot - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:18:20.966: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:18:21.110932  984981 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:18:21.111: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:201
Oct 13 09:18:21.116: INFO: Driver "cinder" does not support populate data from snapshot - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-4681" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:203]: Driver "cinder" does not support populate data from snapshot - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:20.389: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:20.038: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSI_FSGroupPolicy__LinuxOnly__should_not_modify_fsGroup_if_fsGroupPolicy=None__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 198.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:17.474: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:18:17.130: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-node__Security_Context_should_support_seccomp_default_which_is_unconfined__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.9s

_sig-storage__Volumes_NFSv3_should_be_mountable_for_NFSv3__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 61.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:43.276: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-link__Two_pods_mounting_a_local_volume_one_after_the_other_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 62.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:43.115: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:42.805: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:42.648: INFO: Driver "nfs" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:106]: Driver "hostPath" does not support exec - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume
Oct 13 09:17:42.140: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:17:42.355699  983542 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:17:42.355: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:196
Oct 13 09:17:42.360: INFO: Driver "hostPath" does not support exec - skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-7832" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:106]: Driver "hostPath" does not support exec - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:41.440: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__ConfigMap_should_be_consumable_from_pods_in_volume_with_mappings_as_non-root_with_FSGroup__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 39.1s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:40.582: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:40.211: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:39.883: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:39.516: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:39.134: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:38.740: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:38.304: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:37.848: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Subpath_Container_restart_should_verify_that_container_can_restart_successfully_after_configmaps_modified__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 125.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:36.274: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:35.914: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:35.553: INFO: Driver emptydir doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext3 -- skipping

Stderr
_sig-storage__EmptyDir_volumes_should_support__non-root,0777,default___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 51.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:25.582: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:25.145: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:24.781: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__ResourceQuota__Feature_ScopeSelectors__should_verify_ResourceQuota_with_best_effort_scope_using_scope-selectors.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 17.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:20.452: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:20.080: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:19.609: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:19.213: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-node__Container_Runtime_blackbox_test_on_terminated_container_should_report_termination_message__LinuxOnly__as_empty_when_pod_succeeds_and_TerminationMessagePolicy_FallbackToLogsOnError_is_set__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 41.3s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:15.005: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-auth___Feature_NodeAuthorizer__Getting_an_existing_configmap_should_exit_with_the_Forbidden_error__Skipped_ibmcloud___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:13.782: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:13.452: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:13.118: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 78.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:05.397: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:04.913: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:04.543: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-api-machinery__ServerSideApply_should_not_remove_a_field_if_an_owner_unsets_the_field_but_other_managers_still_have_ownership_of_the_field__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:03.153: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:02.734: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__listing_validating_webhooks_should_work__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.3s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:02.332: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 40.9s

_sig-storage__Flexvolumes_should_be_mountable_when_attachable__Feature_Flexvolumes___Skipped_gce___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:170]: Only supported for providers [gce local] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Flexvolumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename flexvolume
Oct 13 09:17:01.572: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:17:01.772542  981573 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:17:01.772: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Flexvolumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:169
Oct 13 09:17:01.781: INFO: Only supported for providers [gce local] (not openstack)
[AfterEach] [sig-storage] Flexvolumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-flexvolume-6980" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:170]: Only supported for providers [gce local] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:00.958: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:00.524: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:17:00.094: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "cinder" does not define supported mount option - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:16:59.501: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:16:59.652550  981522 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:16:59.652: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:180
Oct 13 09:16:59.658: INFO: Driver "cinder" does not define supported mount option - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-7040" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "cinder" does not define supported mount option - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:59.000: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 09:16:58.369: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:16:58.562736  981496 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:16:58.562: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 09:16:58.578: INFO: Driver "nfs" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-7951" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:57.829: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:57.413: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:56.940: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:56.476: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_list,_patch_and_delete_a_collection_of_StatefulSets__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 101.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:52.125: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:51.756: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:51.435: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:51.073: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:50.766: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:50.439: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:50.092: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:49.735: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:49.315: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:48.887: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:48.427: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:48.088: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:47.710: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-instrumentation__Events_should_delete_a_collection_of_events__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__EmptyDir_volumes_should_support__non-root,0666,tmpfs___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 55.3s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.1s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 09:16:46.222: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:16:46.489589  980559 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:16:46.489: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 09:16:46.494: INFO: Driver "nfs" does not provide raw block - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-1623" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:45.742: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__Pods_should_support_pod_readiness_gates__NodeFeature_PodReadinessGate___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 57.1s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:45.460: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:45.169: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:45.114: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:44.856: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:44.374: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-network__Services_should_create_endpoints_for_unready_pods__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 228.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:37.212: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:36.820: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-apps__Deployment_RollingUpdateDeployment_should_delete_old_pods_and_create_new_ones__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 58.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:26.446: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:26.077: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_VSAN_storage_capability_with_valid_diskStripes_and_objectSpaceReservation_values_and_a_VSAN_datastore_is_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:16:25.402: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:16:25.572011  980034 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:16:25.572: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:16:25.576: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-9267" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_persistent_volume_claim__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 11.9s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:24.790: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:24.473: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__blockfswithformat__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_read_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 50.6s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 56.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:21.873: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:21.526: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 41.8s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:20.033: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:19.712: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:19.401: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:19.074: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__ConfigMap_updates_should_be_reflected_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 26.9s

_sig-storage__EmptyDir_volumes_should_support__root,0777,tmpfs___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 47.2s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:14.633: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV,_based_on_multiple_zones_specified_in_the_storage_class.__No_shared_datastores_exist_among_both_zones___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:16:14.121: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:16:14.294983  979540 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:16:14.295: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:16:14.302: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-7263" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "csi-hostpath" does not define supported mount option - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:16:13.464: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:16:13.625412  979528 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:16:13.625: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:180
Oct 13 09:16:13.629: INFO: Driver "csi-hostpath" does not define supported mount option - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-6342" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "csi-hostpath" does not define supported mount option - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:12.987: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:12.643: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-cli__Kubectl_client_Simple_pod_should_support_exec_using_resource/name__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 46.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:09.960: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:16:09.659: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-apps__Job_should_fail_to_exceed_backoffLimit__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 48.8s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:55.853: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-node__Security_Context_When_creating_a_container_with_runAsNonRoot_should_not_run_with_an_explicit_root_user_ID__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 44.8s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:33.822: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:33.412: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:33.084: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-apps__ReplicaSet_should_adopt_matching_pods_on_creation_and_release_no_longer_matching_pods__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 48.9s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:32.245: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:31.821: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:31.371: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:30.994: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__Secrets_should_be_able_to_mount_in_a_volume_regardless_of_a_different_secret_existing_with_same_name_in_different_namespace__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 55.4s

_sig-node__Probing_container_with_readiness_probe_should_not_be_ready_before_initial_delay_and_never_restart__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 63.0s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:19.361: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:18.978: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:18.650: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__PersistentVolumes_NFS_with_multiple_PVs_and_PVCs_all_in_same_ns_should_create_2_PVs_and_4_PVCs__test_write_access__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 88.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:16.106: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:15.768: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:15.346: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:14.877: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:14.423: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:14.039: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__ReplicationController_should_release_no_longer_matching_pods__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 6.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:11.352: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:10.993: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__Watchers_should_be_able_to_restart_watching_from_the_last_resource_version_observed_by_the_previous_watch__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:09.725: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:09.286: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 61.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:08.886: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:15:08.497: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-network__Proxy_version_v1_should_proxy_through_a_service_and_a_pod___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 30.7s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:57.925: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:57.457: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:57.062: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__PreStop_should_call_prestop_when_killing_a_pod___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 62.0s

_sig-storage__PersistentVolumes-local___Volume_type__tmpfs__Two_pods_mounting_a_local_volume_at_the_same_time_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 40.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:52.523: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Downward_API_volume_should_provide_container's_memory_limit__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.0s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:40.653: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-apps__Job_should_fail_when_exceeds_active_deadline__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 2.7s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:37.570: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__ServerSideApply_should_work_for_subresources__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:36.114: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:35.790: INFO: Driver hostPath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ntfs -- skipping

Stderr
_sig-storage__Projected_secret_should_be_consumable_from_pods_in_volume_with_mappings_and_Item_Mode_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.1s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:23.067: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__InitContainer__NodeConformance__should_not_start_app_containers_if_init_containers_fail_on_a_RestartAlways_pod__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 69.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.4s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:09.241: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:08.925: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:08.636: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-network__EndpointSlice_should_have_Endpoints_and_EndpointSlices_pointing_to_API_Server__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:07.275: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:06.946: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:06.689: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:14:06.398: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 168.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 55.5s

_sig-network__EndpointSlice_should_create_and_delete_Endpoints_and_EndpointSlices_for_a_Service_with_a_selector_specified__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 5.1s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:13:22.073: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-api-machinery__CustomResourcePublishOpenAPI__Privileged_ClusterAdmin__works_for_CRD_preserving_unknown_fields_at_the_schema_root__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 76.0s

_sig-auth__ServiceAccounts_should_set_ownership_and_permission_when_RunAsUser_or_FsGroup_is_present__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 94.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:13:19.558: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__CSI_mock_volume_CSIStorageCapacity_CSIStorageCapacity_used,_insufficient_capacity__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 110.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:13:18.766: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:13:18.365: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__Volume_limits_should_verify_that_all_nodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/volume_limits.go:36]: Only supported for providers [aws gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume limits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-limits-on-node
Oct 13 09:13:17.737: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:13:17.933304  972993 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:13:17.933: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume limits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/volume_limits.go:35
Oct 13 09:13:17.943: INFO: Only supported for providers [aws gce gke] (not openstack)
[AfterEach] [sig-storage] Volume limits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-limits-on-node-4350" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/volume_limits.go:36]: Only supported for providers [aws gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:13:17.141: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_VSAN_storage_capability_with_non-vsan_datastore_is_not_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:13:16.614: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:13:16.800841  972969 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:13:16.800: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:13:16.805: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-322" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_pod.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 14.2s

_sig-node__Variable_Expansion_should_allow_substituting_values_in_a_volume_subpath__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.4s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:13:04.995: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:13:04.845: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:13:04.480: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:13:04.164: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Secrets_should_patch_a_secret__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.4s

_sig-node__Secrets_should_be_consumable_from_pods_in_env_vars__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.3s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:48.499: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:48.149: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:47.810: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-network__Ingress_API_should_support_creating_Ingress_API_operations__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.1s

_sig-network__Proxy_version_v1_should_proxy_logs_on_node_using_proxy_subresource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.7s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:44.607: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 118.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:23.548: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:23.149: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:22.821: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:22.438: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PVC_Protection_Verify_that_PVC_in_active_use_by_a_pod_is_not_removed_immediately__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 49.1s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:14.739: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_implement_legacy_replacement_when_the_update_strategy_is_OnDelete__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 112.0s

Failed:
fail [k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:557]: Oct 13 09:13:55.642: Failed to delete stateful pod ss2-1 for StatefulSet e2e-statefulset-7447/ss2: pods "ss2-1" not found

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-apps] StatefulSet
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename statefulset
Oct 13 09:12:14.731: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:12:15.364700  970187 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:12:15.364: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:92
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:107
STEP: Creating service test in namespace e2e-statefulset-7447
[It] should implement legacy replacement when the update strategy is OnDelete [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:503
STEP: Creating a new StatefulSet
Oct 13 09:12:15.456: INFO: Found 0 stateful pods, waiting for 3
Oct 13 09:12:25.473: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:12:35.465: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:12:45.469: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:12:55.491: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:13:05.469: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:13:15.461: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:13:25.463: INFO: Found 2 stateful pods, waiting for 3
Oct 13 09:13:35.462: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:35.462: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:35.462: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 13 09:13:45.461: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:45.461: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:45.461: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Restoring Pods to the current revision
Oct 13 09:13:45.551: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:45.551: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:45.551: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj to quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm
Oct 13 09:13:45.599: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Recreating Pods at the new revision
Oct 13 09:13:55.642: FAIL: Failed to delete stateful pod ss2-1 for StatefulSet e2e-statefulset-7447/ss2: pods "ss2-1" not found

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func9.2.9()
	k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:557 +0xdee
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0000001a0)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113 +0xba
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0018dce68)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:64 +0x125
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x7fc73e8b4fff)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/it_node.go:26 +0x7b
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00293b1d0, 0xc0018dd230, {0x83433a0, 0xc000388900})
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:215 +0x2a9
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00293b1d0, {0x83433a0, 0xc000388900})
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:138 +0xe7
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000e46dc0, 0xc00293b1d0)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:200 +0xe5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000e46dc0)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:170 +0x1a5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000e46dc0)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:66 +0xc5
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000374780, {0x8343660, 0xc0021c4e10}, {0x0, 0x0}, {0xc000a82070, 0x1, 0x1}, {0x843fe58, 0xc000388900}, ...)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/suite/suite.go:62 +0x4b2
github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc001ec1590, {0xc000b372d0, 0xb8fc7b0, 0x457d780})
	github.com/openshift/origin/pkg/test/ginkgo/cmd_runtest.go:61 +0x3be
main.newRunTestCommand.func1.1()
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x32
github.com/openshift/origin/test/extended/util.WithCleanup(0xc00193fc18)
	github.com/openshift/origin/test/extended/util/test.go:168 +0xad
main.newRunTestCommand.func1(0xc001eea780, {0xc000b372d0, 0x1, 0x1})
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x38a
github.com/spf13/cobra.(*Command).execute(0xc001eea780, {0xc000b372a0, 0x1, 0x1})
	github.com/spf13/cobra@v1.1.3/command.go:852 +0x60e
github.com/spf13/cobra.(*Command).ExecuteC(0xc001831b80)
	github.com/spf13/cobra@v1.1.3/command.go:960 +0x3ad
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.1.3/command.go:897
main.main.func1(0xc000531f00)
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:84 +0x8a
main.main()
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:85 +0x3b6
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:118
Oct 13 09:13:55.654: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-statefulset-7447 describe po ss2-0'
Oct 13 09:13:55.826: INFO: stderr: ""
Oct 13 09:13:55.826: INFO: stdout: "Name:                      ss2-0\nNamespace:                 e2e-statefulset-7447\nPriority:                  0\nNode:                      ostest-n5rnf-worker-0-8kq82/10.196.2.72\nStart Time:                Thu, 13 Oct 2022 09:13:48 +0000\nLabels:                    baz=blah\n                           controller-revision-hash=ss2-77bddb779c\n                           foo=bar\n                           statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations:               k8s.v1.cni.cncf.io/network-status:\n                             [{\n                                 \"name\": \"kuryr\",\n                                 \"interface\": \"eth0\",\n                                 \"ips\": [\n                                     \"10.128.179.60\"\n                                 ],\n                                 \"mac\": \"fa:16:3e:30:2b:46\",\n                                 \"default\": true,\n                                 \"dns\": {}\n                             }]\n                           k8s.v1.cni.cncf.io/networks-status:\n                             [{\n                                 \"name\": \"kuryr\",\n                                 \"interface\": \"eth0\",\n                                 \"ips\": [\n                                     \"10.128.179.60\"\n                                 ],\n                                 \"mac\": \"fa:16:3e:30:2b:46\",\n                                 \"default\": true,\n                                 \"dns\": {}\n                             }]\n                           openshift.io/scc: anyuid\nStatus:                    Terminating (lasts 0s)\nTermination Grace Period:  0s\nIP:                        \nIPs:                       <none>\nControlled By:             StatefulSet/ss2\nContainers:\n  webserver:\n    Container ID:   \n    Image:          quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm\n    Image ID:       \n    Port:           <none>\n    Host Port:      <none>\n    State:          Waiting\n      Reason:       ContainerCreating\n    Ready:          False\n    Restart Count:  0\n    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwr4k (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             False \n  ContainersReady   False \n  PodScheduled      True \nVolumes:\n  kube-api-access-rwr4k:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\n    ConfigMapName:           openshift-service-ca.crt\n    ConfigMapOptional:       <nil>\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason          Age   From               Message\n  ----    ------          ----  ----               -------\n  Normal  Scheduled       7s    default-scheduler  Successfully assigned e2e-statefulset-7447/ss2-0 to ostest-n5rnf-worker-0-8kq82\n  Normal  AddedInterface  3s    multus             Add eth0 [10.128.179.60/23] from kuryr\n  Normal  Pulling         3s    kubelet            Pulling image \"quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm\"\n"
Oct 13 09:13:55.826: INFO: 
Output of kubectl describe ss2-0:
Name:                      ss2-0
Namespace:                 e2e-statefulset-7447
Priority:                  0
Node:                      ostest-n5rnf-worker-0-8kq82/10.196.2.72
Start Time:                Thu, 13 Oct 2022 09:13:48 +0000
Labels:                    baz=blah
                           controller-revision-hash=ss2-77bddb779c
                           foo=bar
                           statefulset.kubernetes.io/pod-name=ss2-0
Annotations:               k8s.v1.cni.cncf.io/network-status:
                             [{
                                 "name": "kuryr",
                                 "interface": "eth0",
                                 "ips": [
                                     "10.128.179.60"
                                 ],
                                 "mac": "fa:16:3e:30:2b:46",
                                 "default": true,
                                 "dns": {}
                             }]
                           k8s.v1.cni.cncf.io/networks-status:
                             [{
                                 "name": "kuryr",
                                 "interface": "eth0",
                                 "ips": [
                                     "10.128.179.60"
                                 ],
                                 "mac": "fa:16:3e:30:2b:46",
                                 "default": true,
                                 "dns": {}
                             }]
                           openshift.io/scc: anyuid
Status:                    Terminating (lasts 0s)
Termination Grace Period:  0s
IP:                        
IPs:                       <none>
Controlled By:             StatefulSet/ss2
Containers:
  webserver:
    Container ID:   
    Image:          quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwr4k (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-rwr4k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       7s    default-scheduler  Successfully assigned e2e-statefulset-7447/ss2-0 to ostest-n5rnf-worker-0-8kq82
  Normal  AddedInterface  3s    multus             Add eth0 [10.128.179.60/23] from kuryr
  Normal  Pulling         3s    kubelet            Pulling image "quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm"

Oct 13 09:13:55.827: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-statefulset-7447 logs ss2-0 --tail=100'
Oct 13 09:13:55.994: INFO: rc: 1
Oct 13 09:13:55.994: INFO: 
Last 100 log lines of ss2-0:

Oct 13 09:13:55.994: INFO: Deleting all statefulset in ns e2e-statefulset-7447
Oct 13 09:13:55.998: INFO: Scaling statefulset ss2 to 0
Oct 13 09:14:06.023: INFO: Waiting for statefulset status.replicas updated to 0
Oct 13 09:14:06.027: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "e2e-statefulset-7447".
STEP: Found 34 events.
Oct 13 09:14:06.062: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: { } Scheduled: Successfully assigned e2e-statefulset-7447/ss2-0 to ostest-n5rnf-worker-0-j4pkp
Oct 13 09:14:06.062: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: { } Scheduled: Successfully assigned e2e-statefulset-7447/ss2-0 to ostest-n5rnf-worker-0-8kq82
Oct 13 09:14:06.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-1: { } Scheduled: Successfully assigned e2e-statefulset-7447/ss2-1 to ostest-n5rnf-worker-0-8kq82
Oct 13 09:14:06.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-2: { } Scheduled: Successfully assigned e2e-statefulset-7447/ss2-2 to ostest-n5rnf-worker-0-8kq82
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:12:15 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:04 +0000 UTC - event for ss2-0: {multus } AddedInterface: Add eth0 [10.128.179.227/23] from kuryr
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:04 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj"
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:15 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:15 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:15 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:15 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj" in 10.243152958s
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:32 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:32 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:32 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj" already present on machine
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:32 +0000 UTC - event for ss2-1: {multus } AddedInterface: Add eth0 [10.128.179.60/23] from kuryr
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:33 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:37 +0000 UTC - event for ss2-2: {multus } AddedInterface: Add eth0 [10.128.178.210/23] from kuryr
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:37 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:37 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:37 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj" already present on machine
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:38 +0000 UTC - event for test: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint e2e-statefulset-7447/test: Operation cannot be fulfilled on endpoints "test": the object has been modified; please apply your changes to the latest version and try again
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Unhealthy: Readiness probe failed: Get "http://10.128.179.227:80/index.html": read tcp 10.196.0.199:45820->10.128.179.227:80: read: connection reset by peer
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "40fc3350-2465-40fb-ad19-e183eab52541" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_ss2-0_e2e-statefulset-7447_40fc3350-2465-40fb-ad19-e183eab52541_0(6d7eabfa390111c51f2d272b1725729ccf8e68ce430628bd0452724355514061): error removing pod e2e-statefulset-7447_ss2-0 from CNI network \"multus-cni-network\": delegateDel: error invoking ConflistDel - \"kuryr\": conflistDel: error in getting result from DelNetworkList: Looks like http://localhost:5036/delNetwork cannot be reached. Is kuryr-daemon running?; Post \"http://localhost:5036/delNetwork\": dial tcp [::1]:5036: connect: connection refused"
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Killing: Stopping container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Unhealthy: Readiness probe failed: Get "http://10.128.179.60:80/index.html": dial tcp 10.128.179.60:80: connect: connection refused
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Killing: Stopping container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Killing: Stopping container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:47 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Unhealthy: Readiness probe failed: Get "http://10.128.178.210:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:52 +0000 UTC - event for ss2-0: {multus } AddedInterface: Add eth0 [10.128.179.60/23] from kuryr
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:52 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm"
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:14:02 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:14:02 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm" in 10.523033128s
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:14:03 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:14:03 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Killing: Stopping container webserver
Oct 13 09:14:06.066: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 13 09:14:06.066: INFO: 
Oct 13 09:14:06.073: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-statefulset-7447" for this suite.
fail [k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:557]: Oct 13 09:13:55.642: Failed to delete stateful pod ss2-1 for StatefulSet e2e-statefulset-7447/ss2: pods "ss2-1" not found

Stderr
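
Triage sketch for the failure above (illustrative only, not part of the captured run): the test failed at k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:557 because pod ss2-1 was already gone when the suite tried to delete it while recreating Pods at the new revision, and the namespace events show kuryr/multus sandbox teardown errors around the same time. Assuming access to the same cluster and kubeconfig, the state could be inspected with stock kubectl commands such as the following; the namespace e2e-statefulset-7447 and the foo=bar pod label are taken from the log above and will differ on other runs.

# Watch the StatefulSet converge on the new revision
kubectl -n e2e-statefulset-7447 rollout status statefulset/ss2

# Compare controller revisions with the revision label carried by each pod
kubectl -n e2e-statefulset-7447 get controllerrevisions
kubectl -n e2e-statefulset-7447 get pods -l foo=bar -L controller-revision-hash

# Recent namespace events, including the CNI/kuryr teardown errors seen above
kubectl -n e2e-statefulset-7447 get events --sort-by=.lastTimestamp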
_sig-cli__Kubectl_client_Kubectl_describe_should_check_if_kubectl_describe_prints_relevant_information_for_cronjob__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 3.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:11.350: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__Deployment_should_run_the_lifecycle_of_a_Deployment__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 70.0s

_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_an_existing_and_compatible_SPBM_policy_is_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:12:10.205: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:12:10.979887  970120 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:12:10.979: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:12:10.983: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-6893" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:09.882: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:09.561: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__ResourceQuota_should_be_able_to_update_and_delete_ResourceQuota.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.7s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:08.847: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:08.528: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-node__ConfigMap_should_update_ConfigMap_successfully__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:07.419: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:12:07.028: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-node__Security_Context_When_creating_a_container_with_runAsNonRoot_should_run_with_an_image_specified_user_ID__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 43.2s

_sig-api-machinery__CustomResourcePublishOpenAPI__Privileged_ClusterAdmin__works_for_CRD_preserving_unknown_fields_in_an_embedded_object__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 77.0s

_sig-node__Probing_container_should__not__be_restarted_by_liveness_probe_because_startup_probe_delays_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 274.0s

_sig-node__Security_Context_should_support_pod.Spec.SecurityContext.RunAsUser__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_Snapshot__retain_policy___snapshottable_Feature_VolumeSnapshotDataSource__volume_snapshot_controller__should_check_snapshot_fields,_check_restore_correctly_works_after_modifying_source_data,_check_deletion__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 222.0s

_sig-storage__CSI_mock_volume_CSI_Volume_expansion_should_not_expand_volume_if_resizingOnDriver=off,_resizingOnSC=on__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 295.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:11:17.367: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:11:16.986: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:11:16.559: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:11:16.143: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_with_incompatible_datastore_and_zone_combination_specified_in_storage_class_fails__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:11:15.536: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:11:15.719415  967995 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:11:15.719: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:11:15.724: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-2791" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:11:14.863: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:11:14.387: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-cli__Kubectl_Port_forwarding_With_a_server_listening_on_0.0.0.0_that_expects_NO_client_request_should_support_a_client_that_connects,_sends_DATA,_and_disconnects__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 48.9s

_sig-storage__PVC_Protection_Verify_that_scheduling_of_a_pod_that_uses_PVC_that_is_being_deleted_fails_and_the_pod_becomes_Unschedulable__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.1s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:11:06.515: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Volumes_NFSv4_should_be_mountable_for_NFSv4__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 68.0s

_sig-node__Downward_API_should_provide_host_IP_and_pod_IP_as_an_env_var_if_pod_uses_host_network__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 4.8s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:56.590: INFO: Driver nfs doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:56.234: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Subpath_Atomic_writer_volumes_should_support_subpaths_with_configmap_pod_with_mountPath_of_existing_file__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 69.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:51.786: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__EmptyDir_volumes_volume_on_tmpfs_should_have_the_correct_mode__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 47.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:44.008: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:43.630: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:43.210: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:42.863: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-apps__DisruptionController_should_observe_PodDisruptionBudget_status_updated__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:40.999: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:40.678: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:40.333: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:39.999: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:39.662: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:39.352: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:10:39.047: INFO: Driver nfs doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping

Stderr
_sig-apps__Job_should_adopt_matching_orphans_and_release_non-matching_pods__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.7s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 68.0s

_sig-node___Feature_Example__Downward_API_should_create_a_pod_that_prints_his_name_and_namespace__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 45.6s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:09:53.143: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV_with_storage_policy_specified_in_storage_class_in_waitForFirstConsumer_binding_mode_with_multiple_allowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:09:52.631: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:09:52.794831  964905 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:09:52.794: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:09:52.803: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-7407" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__CustomResourceDefinition_resources__Privileged_ClusterAdmin__custom_resource_defaulting_for_requests_and_from_storage_works___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 4.1s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__delayed_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:09:47.903: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 62.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:09:40.773: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:09:40.397: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_deny_crd_creation__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 67.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:09:31.352: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 96.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:09:30.825: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 90.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:09:21.117: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__CustomResourcePublishOpenAPI__Privileged_ClusterAdmin__works_for_multiple_CRDs_of_same_group_but_different_versions__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 228.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:09:15.537: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_unconditionally_reject_operations_on_fail_closed_webhook__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 27.0s

_sig-storage__Projected_downwardAPI_should_set_DefaultMode_on_files__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 47.4s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:09:00.054: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-cli__Kubectl_client_Kubectl_patch_should_add_annotations_for_pods_in_rc___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 26.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:53.889: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 41.7s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 73.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:26.627: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Security_Context_should_support_pod.Spec.SecurityContext.RunAsUser_And_pod.Spec.SecurityContext.RunAsGroup__LinuxOnly___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 37.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:22.693: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:22.379: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:22.071: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:21.734: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:21.406: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:21.051: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:20.693: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Projected_configMap_should_be_consumable_from_pods_in_volume_with_mappings_as_non-root__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 29.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:19.420: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:19.067: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:18.727: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:18.337: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:17.957: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-cli__Kubectl_client_Simple_pod_should_return_command_exit_codes_execing_into_a_container_with_a_failing_command__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 46.7s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "cinder" does not define supported mount option - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:08:16.648: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:08:16.787277  961254 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:08:16.787: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:180
Oct 13 09:08:16.790: INFO: Driver "cinder" does not define supported mount option - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-2188" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "cinder" does not define supported mount option - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:16.082: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:15.762: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:15.412: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:15.084: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 71.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:08:04.434: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-network__Services_should_be_rejected_when_no_endpoints_exist__Skipped_ibmcloud___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 67.0s

Failed:
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:2029]: Unexpected error:
    <*errors.errorString | 0xc0002fcad0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Oct 13 09:07:47.194: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:07:47.347584  960519 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:07:47.347: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:749
[It] should be rejected when no endpoints exist [Skipped:ibmcloud] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:1989
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node ostest-n5rnf-worker-0-8kq82
Oct 13 09:07:47.379: INFO: Creating new exec pod
Oct 13 09:08:13.436: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node ostest-n5rnf-worker-0-8kq82
Oct 13 09:08:13.436: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:16.745: INFO: rc: 1
Oct 13 09:08:16.745: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Oct 13 09:08:18.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:22.188: INFO: rc: 1
Oct 13 09:08:22.188: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Oct 13 09:08:22.747: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:26.077: INFO: rc: 1
Oct 13 09:08:26.077: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Oct 13 09:08:26.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:30.077: INFO: rc: 1
Oct 13 09:08:30.077: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Oct 13 09:08:30.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:34.061: INFO: rc: 1
Oct 13 09:08:34.061: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Oct 13 09:08:34.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:38.043: INFO: rc: 1
Oct 13 09:08:38.043: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Oct 13 09:08:38.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:42.059: INFO: rc: 1
Oct 13 09:08:42.059: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Oct 13 09:08:42.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:46.041: INFO: rc: 1
Oct 13 09:08:46.041: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Oct 13 09:08:46.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:50.115: INFO: rc: 1
Oct 13 09:08:50.115: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Oct 13 09:08:50.115: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:53.422: INFO: rc: 1
Oct 13 09:08:53.422: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "e2e-services-4369".
STEP: Found 5 events.
Oct 13 09:08:53.429: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-noendpointsrg62k: { } Scheduled: Successfully assigned e2e-services-4369/execpod-noendpointsrg62k to ostest-n5rnf-worker-0-8kq82
Oct 13 09:08:53.429: INFO: At 2022-10-13 09:08:10 +0000 UTC - event for execpod-noendpointsrg62k: {multus } AddedInterface: Add eth0 [10.128.167.117/23] from kuryr
Oct 13 09:08:53.429: INFO: At 2022-10-13 09:08:10 +0000 UTC - event for execpod-noendpointsrg62k: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 09:08:53.429: INFO: At 2022-10-13 09:08:10 +0000 UTC - event for execpod-noendpointsrg62k: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container agnhost-container
Oct 13 09:08:53.429: INFO: At 2022-10-13 09:08:10 +0000 UTC - event for execpod-noendpointsrg62k: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container agnhost-container
Oct 13 09:08:53.433: INFO: POD                       NODE                         PHASE    GRACE  CONDITIONS
Oct 13 09:08:53.434: INFO: execpod-noendpointsrg62k  ostest-n5rnf-worker-0-8kq82  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:07:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:08:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:08:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:07:47 +0000 UTC  }]
Oct 13 09:08:53.434: INFO: 
Oct 13 09:08:53.440: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-services-4369" for this suite.
[AfterEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:753
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:2029]: Unexpected error:
    <*errors.errorString | 0xc0002fcad0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Stderr
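
Note: the failure above shows the endpoint-rejection probe returning TIMEOUT instead of the expected REFUSED for a service with no endpoints. A minimal sketch for re-running the probe by hand, assuming a comparable agnhost exec pod and the no-pods service are still present (the namespace and pod names below are taken from this run's log and would differ on a fresh run):

  # Exec into the agnhost pod and attempt a TCP connect to the endpoint-less service.
  # A connection that is actively rejected prints a refusal; the retry loop above
  # instead saw TIMEOUT, i.e. traffic to the ClusterIP was dropped rather than rejected.
  kubectl --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- \
    /agnhost connect --timeout=3s no-pods:80

If the command keeps printing TIMEOUT, it reproduces the behavior that caused this test to exhaust its wait and fail with "timed out waiting for the condition".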
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:46.608: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-apps__DisruptionController_evictions__enough_pods,_replicaSet,_percentage_=>_should_allow_an_eviction__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 159.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:39.309: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Generic_Ephemeral-volume__default_fs___immediate-binding___ephemeral_should_support_two_pods_which_share_the_same_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 270.0s

_sig-storage__ConfigMap_should_be_consumable_from_pods_in_volume_with_mappings_and_Item_mode_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 49.4s

_sig-storage__PersistentVolumes-local___Volume_type__dir-link-bindmounted__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_read_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 41.8s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 35.5s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 64.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Generic_Ephemeral-volume__default_fs___immediate-binding___ephemeral_should_create_read-only_inline_ephemeral_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 228.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:08.279: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:07.866: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:07.465: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:07.137: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:06.778: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:06.456: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:06.100: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-api-machinery__CustomResourceConversionWebhook__Privileged_ClusterAdmin__should_be_able_to_convert_from_CR_v1_to_CR_v2__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 27.8s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:07:00.262: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:59.928: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:59.574: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:59.240: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:58.909: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:58.609: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__RuntimeClass__should_support_RuntimeClasses_API_operations__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-api-machinery__ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_replica_set.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 11.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:53.799: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:53.458: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_api-versions_should_check_if_v1_is_in_available_api_versions___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 47.1s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:52.152: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:51.882: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:51.805: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:51.505: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 09:06:50.776: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:06:50.946217  957963 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:06:50.946: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 09:06:50.959: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-6609" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-link__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_write_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 32.0s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:50.332: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:50.032: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:06:49.721: INFO: Driver "nfs" does not provide raw block - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping

Stderr
_sig-apps__ReplicationController_should_adopt_matching_pods_on_creation__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 23.7s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 40.1s

_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_mutate_configmap__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 39.1s

_sig-storage__CSI_mock_volume_CSIStorageCapacity_CSIStorageCapacity_used,_no_capacity__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 102.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:54.780: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:54.350: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-auth__ServiceAccounts_ServiceAccountIssuerDiscovery_should_support_OIDC_discovery_of_service_account_issuer__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 59.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:51.343: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:51.015: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-storage__Projected_configMap_optional_updates_should_be_reflected_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 111.0s

_sig-auth__ServiceAccounts_should_allow_opting_out_of_API_token_automount___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:18.614: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__Volume_FStype__Feature_vsphere__verify_fstype_-_default_value_should_be_ext4__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.1s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:76]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-fstype
Oct 13 09:05:17.924: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:05:18.204920  954842 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:05:18.205: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:75
Oct 13 09:05:18.226: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-fstype-6329" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:76]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:17.178: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:16.863: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:16.548: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:16.221: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:15.819: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:15.339: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:14.956: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSI_workload_information_using_mock_driver_should_be_passed_when_podInfoOnMount=true__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 429.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:05:05.255: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 214.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:241]: Driver "cinder" does not support cloning - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:04:46.366: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:04:46.501624  953916 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:04:46.501: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with pvc data source [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:239
Oct 13 09:04:46.505: INFO: Driver "cinder" does not support cloning - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-5731" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:241]: Driver "cinder" does not support cloning - skipping

Stderr
_sig-api-machinery__health_handlers_should_contain_necessary_checks__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.3s

_sig-storage__CSI_mock_volume_CSI_attach_test_using_mock_driver_should_preserve_attachment_policy_when_no_CSIDriver_present__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 130.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:04:39.479: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:04:39.162: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__Pod_Disks_should_be_able_to_delete_a_non-existent_PD_without_error__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/pd.go:450]: Only supported for providers [gce] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Pod Disks
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-disks
Oct 13 09:04:38.625: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:04:38.776257  953096 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:04:38.776: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/pd.go:74
[It] should be able to delete a non-existent PD without error [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/pd.go:449
Oct 13 09:04:38.816: INFO: Only supported for providers [gce] (not openstack)
[AfterEach] [sig-storage] Pod Disks
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pod-disks-638" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/pd.go:450]: Only supported for providers [gce] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:04:38.081: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Downward_API_volume_should_update_annotations_on_modification__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 51.4s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 67.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:04:07.508: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:04:07.200: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:04:06.840: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
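
Note: the "Only supported for providers [...] (not openstack)" skips recorded throughout this report are produced by a provider gate that compares the suite's active cloud provider (openstack in this run) against the providers a driver supports. A minimal sketch of that check, under the assumption that the provider comes from a framework flag, is shown below; the function and variable names are illustrative, not the real e2e framework API.

```go
// Illustrative sketch of the provider gate behind the
// "Only supported for providers [...] (not <current>)" skips above.
// Names are hypothetical, not the real e2e framework API.
package main

import "fmt"

// currentProvider stands in for the framework's configured cloud provider;
// in this run it is "openstack", which is why the cloud-specific tests skip.
var currentProvider = "openstack"

// skipUnlessProviderIs returns a skip reason unless the active provider is
// one of the listed ones, matching the wording seen in the report.
func skipUnlessProviderIs(supported ...string) (string, bool) {
	for _, p := range supported {
		if p == currentProvider {
			return "", false
		}
	}
	return fmt.Sprintf("Only supported for providers %v (not %s)", supported, currentProvider), true
}

func main() {
	if reason, skip := skipUnlessProviderIs("gce", "gke"); skip {
		fmt.Println(reason) // Only supported for providers [gce gke] (not openstack)
	}
}
```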
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:04:06.460: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:04:06.090: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:04:05.703: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__EmptyDir_volumes_pod_should_support_shared_volumes_between_containers__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 45.1s

_sig-storage__CSI_mock_volume_storage_capacity_exhausted,_late_binding,_with_topology__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 188.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:49.404: INFO: Driver cinder doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping

Stderr
_sig-apps__StatefulSet_Basic_StatefulSet_functionality__StatefulSetBasic__should_have_a_working_scale_subresource__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 291.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:35.159: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Projected_secret_should_be_consumable_from_pods_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 34.9s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 44.1s

_sig-apps__Job_should_not_create_pods_when_created_in_suspend_state__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 69.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:29.352: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:28.939: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:28.943: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:28.586: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-storage__Downward_API_volume_should_set_DefaultMode_on_files__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:18.060: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:17.727: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:17.361: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:17.036: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:16.681: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 44.9s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:14.422: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-apps__ReplicationController_should_surface_a_failure_condition_on_a_common_issue_like_exceeded_quota__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 2.8s

_sig-apps__DisruptionController_evictions__too_few_pods,_absolute_=>_should_not_allow_an_eviction__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 18.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:11.533: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:11.198: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:10.850: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:10.515: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:03:10.190: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 31.4s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:02:56.873: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:02:56.493: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_PVC_creation_fails_when_multiple_zones_are_specified_in_the_storage_class_without_shared_datastores_among_the_zones_in_waitForFirstConsumer_binding_mode__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:02:55.980: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:02:56.104050  949413 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:02:56.104: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:02:56.111: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-6242" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__Servers_with_support_for_Table_transformation_should_return_pod_details__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:02:54.673: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-node__Container_Lifecycle_Hook_when_create_a_pod_with_lifecycle_hook_should_execute_prestop_http_hook_properly__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 48.9s

_sig-storage__Secrets_optional_updates_should_be_reflected_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 41.0s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:02:32.488: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:02:32.108: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 218.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:02:29.293: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:02:28.938: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:02:28.585: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_if_a_non-existing_SPBM_policy_is_not_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:02:28.011: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:02:28.208346  948341 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:02:28.208: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:02:28.220: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-426" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSI_workload_information_using_mock_driver_should_not_be_passed_when_podInfoOnMount=nil__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 210.0s

_sig-api-machinery__ResourceQuota_should_create_a_ResourceQuota_and_ensure_its_status_is_promptly_calculated.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 7.8s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:02:19.749: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__EmptyDir_volumes_volume_on_default_medium_should_have_the_correct_mode__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 53.1s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 50.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:54.046: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 57.5s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:48.419: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:47.977: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:47.597: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:47.247: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:46.870: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__Job_should_create_pods_for_an_Indexed_job_with_completion_indexes_and_specified_hostname__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 38.7s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:40.698: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:40.358: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:38.564: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__vcp-performance__Feature_vsphere__vcp_performance_tests__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_perf.go:70]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] vcp-performance [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename vcp-performance
Oct 13 09:01:38.045: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:01:38.209319  946240 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:01:38.209: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_perf.go:69
Oct 13 09:01:38.213: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] vcp-performance [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-vcp-performance-8258" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_perf.go:70]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__PersistentVolumes__Feature_vsphere__Feature_ReclaimPolicy__persistentvolumereclaim_vsphere__Feature_vsphere__should_delete_persistent_volume_when_reclaimPolicy_set_to_delete_and_associated_claim_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:55]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistentvolumereclaim
Oct 13 09:01:37.234: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:01:37.398552  946227 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:01:37.398: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:47
[BeforeEach] persistentvolumereclaim:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:54
Oct 13 09:01:37.405: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] persistentvolumereclaim:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:63
STEP: running testCleanupVSpherePersistentVolumeReclaim
[AfterEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistentvolumereclaim-5708" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:55]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:36.731: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:36.368: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:36.039: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_get_componentstatuses_should_get_componentstatuses__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 2.0s

_sig-cli__Kubectl_client_Simple_pod_should_return_command_exit_codes_execing_into_a_container_with_a_successful_command__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 46.6s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:30.186: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:29.884: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__Servers_with_support_for_Table_transformation_should_return_generic_metadata_details_across_all_namespaces_for_nodes__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:28.755: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__PersistentVolumes_GCEPD_should_test_that_deleting_the_Namespace_of_a_PVC_and_Pod_causes_the_successful_detach_of_Persistent_Disk__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:85]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pv
Oct 13 09:01:28.129: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:01:28.298007  945842 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:01:28.298: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:77
Oct 13 09:01:28.309: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pv-8251" for this suite.
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:111
Oct 13 09:01:28.326: INFO: AfterEach: Cleaning up test resources
Oct 13 09:01:28.326: INFO: pvc is nil
Oct 13 09:01:28.326: INFO: pv is nil
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:85]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:27.487: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:27.115: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:26.740: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.6s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:26.370: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:25.982: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:25.545: INFO: Driver nfs doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:25.159: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Volume_Disk_Format__Feature_vsphere__verify_disk_format_type_-_thin_is_honored_for_dynamically_provisioned_pv_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:71]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-disk-format
Oct 13 09:01:24.523: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:01:24.697253  945729 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:01:24.697: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:70
Oct 13 09:01:24.704: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-disk-format-8427" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:71]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-cli__Kubectl_client_kubectl_wait_should_ignore_not_found_error_with_--for=delete__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-cli__Kubectl_client_Kubectl_run_pod_should_create_a_pod_from_an_image_when_restart_is_Never___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 243.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:19.826: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 37.6s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:16.104: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:15.759: INFO: Driver hostPath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:15.443: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__block__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_write_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 32.9s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:13.631: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_with_incompatible_storagePolicy_and_zone_combination_specified_in_storage_class_fails__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:01:13.102: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:01:13.262717  945114 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:01:13.262: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:01:13.268: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-3972" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:12.577: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:12.259: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__PVC_Protection_Verify__immediate__deletion_of_a_PVC_that_is_not_in_active_use_by_a_pod__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.1s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:06.867: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:06.477: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__Projected_downwardAPI_should_update_annotations_on_modification__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.6s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:02.109: INFO: Driver "cinder" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:01:01.486: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__Garbage_collector_should_delete_jobs_and_pods_created_by_cronjob__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 11.8s

_sig-cli__Kubectl_Port_forwarding_With_a_server_listening_on_0.0.0.0_that_expects_a_client_request_should_support_a_client_that_connects,_sends_NO_DATA,_and_disconnects__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 31.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:48.453: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:48.134: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Pods_Extended_Delete_Grace_Period_should_be_submitted_and_removed__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 45.9s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:37.109: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:36.785: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:36.462: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:36.048: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:35.662: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:35.212: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:34.821: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:34.407: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-network__Services_should_test_the_lifecycle_of_an_Endpoint__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:33.071: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__Projected_downwardAPI_should_provide_container's_memory_request__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 49.2s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:00:16.992: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__Projected_downwardAPI_should_provide_container's_cpu_limit__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 81.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 72.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 75.0s

_sig-node__Downward_API_should_provide_default_limits.cpu/memory_from_node_allocatable__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 55.1s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:52.694: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__Garbage_collector_should_orphan_pods_created_by_rc_if_delete_options_say_so__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 41.3s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Generic_Ephemeral-volume__default_fs___immediate-binding___ephemeral_should_support_multiple_inline_ephemeral_volumes__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/ephemeral.go:224]: Multiple generic ephemeral volumes with immediate binding may cause pod startup failures when the volumes get created in separate topology segments.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename ephemeral
Oct 13 08:59:52.069: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:59:52.286358  941794 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:59:52.286: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support multiple inline ephemeral volumes [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/ephemeral.go:221
Oct 13 08:59:52.293: INFO: Multiple generic ephemeral volumes with immediate binding may cause pod startup failures when the volumes get created in separate topology segments.
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-ephemeral-3943" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/ephemeral.go:224]: Multiple generic ephemeral volumes with immediate binding may cause pod startup failures when the volumes get created in separate topology segments.

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 58.2s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:51.319: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:51.056: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:50.938: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:50.668: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_Snapshot__retain_policy___snapshottable_Feature_VolumeSnapshotDataSource__volume_snapshot_controller__should_check_snapshot_fields,_check_restore_correctly_works_after_modifying_source_data,_check_deletion__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 210.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:44.223: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:43.822: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__dir-link__Two_pods_mounting_a_local_volume_at_the_same_time_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.2s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:34.353: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:34.020: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext3 -- skipping

Stderr
_sig-node__ConfigMap_should_be_consumable_via_environment_variable__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:31.928: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:31.569: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Volume_Placement__Feature_vsphere__test_back_to_back_pod_creation_and_deletion_with_different_volume_sources_on_the_same_worker_node__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-placement
Oct 13 08:59:30.920: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:59:31.074904  940969 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:59:31.074: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55
Oct 13 08:59:31.084: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-placement-2042" for this suite.
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSI_Volume_expansion_should_expand_volume_by_restarting_pod_if_attach=off,_nodeExpansion=on__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 406.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:25.810: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Generic_Ephemeral-volume__default_fs___late-binding___ephemeral_should_support_two_pods_which_share_the_same_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 257.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:59:11.372: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Projected_downwardAPI_should_set_mode_on_item_file__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:57.241: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 54.7s

_sig-network__Services_should_complete_a_service_status_lifecycle__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-node__Probing_container_should_be_ready_immediately_after_startupProbe_succeeds__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 63.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:47.880: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:47.606: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:47.409: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:47.152: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-api-machinery__Server_request_timeout_default_timeout_should_be_used_if_the_specified_timeout_in_the_request_URL_is_0s__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:45.951: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:45.609: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:399]: Driver hostPath on volume type InlineVolume doesn't support readOnly source
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 08:58:45.054: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:58:45.238944  939144 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:58:45.239: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:395
Oct 13 08:58:45.242: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 13 08:58:45.251: INFO: Creating resource for inline volume
Oct 13 08:58:45.251: INFO: Driver hostPath on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Oct 13 08:58:45.251: INFO: Deleting pod "pod-subpath-test-inlinevolume-kfkk" in namespace "e2e-provisioning-3342"
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-3342" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:399]: Driver hostPath on volume type InlineVolume doesn't support readOnly source

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:44.603: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:44.139: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_create_quota_should_create_a_quota_without_scopes__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__Projected_configMap_should_be_consumable_from_pods_in_volume_with_mappings_as_non-root_with_FSGroup__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.2s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:37.783: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-apps__Deployment_deployment_should_support_rollover__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 80.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:37.348: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:36.905: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:36.583: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_create_quota_should_create_a_quota_with_scopes__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:35.424: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:35.068: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:34.655: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:34.331: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:33.974: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:33.644: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-scheduling__LimitRange_should_create_a_LimitRange_with_defaults_and_ensure_pod_has_those_defaults_applied.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 8.1s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 63.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:22.367: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__CustomResourcePublishOpenAPI__Privileged_ClusterAdmin__works_for_CRD_with_validation_schema__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 79.0s

_sig-storage__PersistentVolumes_vsphere__Feature_vsphere__should_test_that_deleting_the_Namespace_of_a_PVC_and_Pod_causes_the_successful_detach_of_vsphere_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:64]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pv
Oct 13 08:58:14.346: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:58:14.525583  937754 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:58:14.525: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
Oct 13 08:58:14.531: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pv-8721" for this suite.
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:112
Oct 13 08:58:14.548: INFO: AfterEach: Cleaning up test resources
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:64]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:13.678: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:13.276: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:58:12.924: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__ConfigMap_should_be_consumable_from_pods_in_volume_with_mappings__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.2s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.7s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:57:59.715: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__EmptyDir_volumes_should_support__root,0777,default___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:57:41.543: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:57:41.192: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__tmpfs__One_pod_requesting_one_prebound_PVC_should_be_able_to_mount_volume_and_read_from_pod1__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.0s

_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_is_created_and_attached_to_a_dynamically_created_PV,_based_on_multiple_zones_specified_in_storage_class___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 08:57:40.586: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:57:40.786090  936646 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:57:40.786: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 08:57:40.790: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-1575" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:57:39.959: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:57:39.807: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-api-machinery__Discovery_Custom_resource_should_have_storage_version_hash__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 3.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 76.0s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 80.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:57:27.307: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Projected_configMap_should_be_consumable_from_pods_in_volume__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.6s

_sig-node__Kubelet_when_scheduling_a_busybox_Pod_with_hostAliases_should_write_entries_to_/etc/hosts__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 33.3s

_sig-auth___Feature_NodeAuthorizer__Getting_an_existing_secret_should_exit_with_the_Forbidden_error__Skipped_ibmcloud___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.1s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:57:25.975: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:57:25.970: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:57:25.595: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping

Stderr
_sig-storage__Zone_Support__Feature_vsphere__Verify_a_pod_fails_to_get_scheduled_when_conflicting_volume_topology__allowedTopologies__and_pod_scheduling_constraints_nodeSelector__are_specified__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 08:57:25.307: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:57:25.540730  936118 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:57:25.540: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 08:57:25.548: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-8779" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__ResourceQuota__Feature_ScopeSelectors__should_verify_ResourceQuota_with_terminating_scopes_through_scope_selectors.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 17.2s

_sig-storage__HostPath_should_support_r/w__NodeConformance___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 33.1s

_sig-storage__Secrets_should_be_consumable_from_pods_in_volume_as_non-root_with_defaultMode_and_fsGroup_set__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 49.4s

_sig-storage__CSI_mock_volume_CSI_Volume_expansion_should_expand_volume_by_restarting_pod_if_attach=on,_nodeExpansion=on__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 183.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:56:46.942: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-network__Services_should_preserve_source_pod_IP_for_traffic_thru_service_cluster_IP__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 189.0s

Failed:
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/util.go:133]: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
        },
        Code: 28,
    }
    error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
    Command stdout:
    
    stderr:
    + curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
    command terminated with exit code 28
    
    error:
    exit status 28
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Oct 13 08:56:43.103: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:56:43.240489  934378 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:56:43.240: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:749
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:924
Oct 13 08:56:43.288: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:45.296: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 13 08:56:45.300: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 13 08:56:45.613: INFO: rc: 7
Oct 13 08:56:45.645: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 13 08:56:45.654: INFO: Pod kube-proxy-mode-detector no longer exists
Oct 13 08:56:45.654: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace e2e-services-2073
Oct 13 08:56:45.683: INFO: sourceip-test cluster ip: 172.30.139.16
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Oct 13 08:56:45.736: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:47.791: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:49.747: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:51.751: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:53.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:55.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:57.749: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:59.746: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:01.745: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:03.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:05.743: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:07.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:09.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:11.748: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:13.742: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace e2e-services-2073 to expose endpoints map[echo-sourceip:[8080]]
Oct 13 08:57:13.761: INFO: successfully validated that service sourceip-test in namespace e2e-services-2073 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Oct 13 08:57:13.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 13 08:57:15.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:17.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:19.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:21.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:23.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:25.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:27.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:29.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:31.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:33.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:35.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:37.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:39.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:41.823: INFO: Waiting up to 2m0s to get response from 172.30.139.16:8080
Oct 13 08:57:41.824: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip'
Oct 13 08:58:12.198: INFO: rc: 28
Oct 13 08:58:12.199: INFO: got err: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 13 08:58:14.200: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip'
Oct 13 08:58:44.601: INFO: rc: 28
Oct 13 08:58:44.601: INFO: got err: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 13 08:58:46.602: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip'
Oct 13 08:59:16.982: INFO: rc: 28
Oct 13 08:59:16.982: INFO: got err: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 13 08:59:18.982: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip'
Oct 13 08:59:49.249: INFO: rc: 28
Oct 13 08:59:49.249: INFO: got err: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 13 08:59:51.250: INFO: Deleting deployment
Oct 13 08:59:51.299: INFO: Cleaning up the echo server pod
Oct 13 08:59:51.313: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "e2e-services-2073".
STEP: Found 24 events.
Oct 13 08:59:51.366: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for echo-sourceip: { } Scheduled: Successfully assigned e2e-services-2073/echo-sourceip to ostest-n5rnf-worker-0-j4pkp
Oct 13 08:59:51.366: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned e2e-services-2073/kube-proxy-mode-detector to ostest-n5rnf-worker-0-j4pkp
Oct 13 08:59:51.366: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pause-pod-867d667966-q4kj2: { } Scheduled: Successfully assigned e2e-services-2073/pause-pod-867d667966-q4kj2 to ostest-n5rnf-worker-0-94fxs
Oct 13 08:59:51.366: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pause-pod-867d667966-xpjcj: { } Scheduled: Successfully assigned e2e-services-2073/pause-pod-867d667966-xpjcj to ostest-n5rnf-worker-0-8kq82
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:56:43 +0000 UTC - event for kube-proxy-mode-detector: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:56:43 +0000 UTC - event for kube-proxy-mode-detector: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:56:43 +0000 UTC - event for kube-proxy-mode-detector: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:56:46 +0000 UTC - event for kube-proxy-mode-detector: {kubelet ostest-n5rnf-worker-0-j4pkp} Killing: Stopping container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:10 +0000 UTC - event for echo-sourceip: {multus } AddedInterface: Add eth0 [10.128.164.254/23] from kuryr
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:10 +0000 UTC - event for echo-sourceip: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:10 +0000 UTC - event for echo-sourceip: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:10 +0000 UTC - event for echo-sourceip: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:13 +0000 UTC - event for pause-pod: {deployment-controller } ScalingReplicaSet: Scaled up replica set pause-pod-867d667966 to 2
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:13 +0000 UTC - event for pause-pod-867d667966: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-867d667966-xpjcj
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:13 +0000 UTC - event for pause-pod-867d667966: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-867d667966-q4kj2
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:29 +0000 UTC - event for pause-pod-867d667966-xpjcj: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container agnhost-pause
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:29 +0000 UTC - event for pause-pod-867d667966-xpjcj: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container agnhost-pause
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:29 +0000 UTC - event for pause-pod-867d667966-xpjcj: {multus } AddedInterface: Add eth0 [10.128.164.7/23] from kuryr
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:29 +0000 UTC - event for pause-pod-867d667966-xpjcj: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:39 +0000 UTC - event for pause-pod-867d667966-q4kj2: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:39 +0000 UTC - event for pause-pod-867d667966-q4kj2: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container agnhost-pause
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:39 +0000 UTC - event for pause-pod-867d667966-q4kj2: {multus } AddedInterface: Add eth0 [10.128.164.95/23] from kuryr
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:40 +0000 UTC - event for pause-pod-867d667966-q4kj2: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container agnhost-pause
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:59:51 +0000 UTC - event for echo-sourceip: {kubelet ostest-n5rnf-worker-0-j4pkp} Killing: Stopping container agnhost-container
Oct 13 08:59:51.374: INFO: POD                         NODE                         PHASE    GRACE  CONDITIONS
Oct 13 08:59:51.374: INFO: echo-sourceip               ostest-n5rnf-worker-0-j4pkp  Running  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:56:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:56:45 +0000 UTC  }]
Oct 13 08:59:51.374: INFO: pause-pod-867d667966-q4kj2  ostest-n5rnf-worker-0-94fxs  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:13 +0000 UTC  }]
Oct 13 08:59:51.374: INFO: pause-pod-867d667966-xpjcj  ostest-n5rnf-worker-0-8kq82  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:13 +0000 UTC  }]
Oct 13 08:59:51.374: INFO: 
Oct 13 08:59:51.385: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-services-2073" for this suite.
[AfterEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:753
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/util.go:133]: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
        },
        Code: 28,
    }
    error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
    Command stdout:
    
    stderr:
    + curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
    command terminated with exit code 28
    
    error:
    exit status 28
occurred

Stderr
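
Note: this is the only failure in this span. Every curl from the pause pod to the sourceip-test cluster IP (172.30.139.16:8080/clientip) timed out with curl exit code 28 (operation timed out); the earlier kube-proxy mode probe returned curl exit code 7 (could not connect), which the test log itself flags as possibly expected, and the AddedInterface events show the pod network here is Kuryr rather than kube-proxy. The sketch below is a minimal, hypothetical way to re-run the failing probe outside the suite, assuming access to the same kubeconfig; the namespace and pod name are copied from this run's log and will differ (or no longer exist) on another run.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Re-run the exact probe the test failed on: exec curl inside the pause pod
// against the service cluster IP. curl exit code 28 means the connection timed
// out; a body like "client_address=..." means the service path works.
func main() {
	args := []string{
		"--server=https://api.ostest.shiftstack.com:6443",
		"--kubeconfig=.kube/config",
		"--namespace=e2e-services-2073", // namespace from the failed run; it is destroyed after the test
		"exec", "pause-pod-867d667966-q4kj2", "--",
		"/bin/sh", "-c", "curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip",
	}
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := exec.Command("/usr/bin/kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d: %s\n", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v (output: %s)\n", attempt, err, out)
		time.Sleep(2 * time.Second)
	}
}
```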
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Inline-volume__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:56:42.574: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__should_mutate_custom_resource_with_pruning__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 49.9s

_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 75.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:56:11.396: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__Projected_secret_should_be_consumable_in_multiple_volumes_in_a_pod__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 51.1s

_sig-cli__Kubectl_client_Kubectl_server-side_dry-run_should_check_if_kubectl_can_dry-run_update_Pods__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 242.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:56:00.675: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:56:00.326: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:59.987: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:59.616: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Inline-volume__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:59.180: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:58.775: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-node__Sysctls__LinuxOnly___NodeConformance__should_not_launch_unsafe,_but_not_explicitly_enabled_sysctls_on_the_node__MinimumKubeletVersion_1.21___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 2.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 51.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:54.726: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__filesystem_volmode___volumeLimits_should_verify_that_all_csinodes_have_volume_limits__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:54.353: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__EmptyDir_volumes_should_support__non-root,0644,tmpfs___LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 53.2s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:53.968: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Inline-volume__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:53.665: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:53.291: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-network__EndpointSlice_should_support_creating_EndpointSlice_API_operations__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:52.038: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:51.707: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:51.398: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-cli__Kubectl_client_Kubectl_diff_should_check_if_kubectl_diff_finds_a_difference_for_Deployments__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 3.5s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:47.470: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__ext3___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:47.041: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-apps__CronJob_should_delete_failed_finished_jobs_with_limit_of_one_job__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 99.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:43.077: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping

Stderr
_sig-cli__Kubectl_client_Update_Demo_should_scale_a_replication_controller___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 58.1s

_sig-api-machinery__ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_service.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 12.1s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:34.484: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Inline-volume__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:399]: Driver emptydir on volume type InlineVolume doesn't support readOnly source
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 08:55:33.815: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:55:34.024399  931273 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:55:34.024: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:395
Oct 13 08:55:34.028: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 13 08:55:34.028: INFO: Creating resource for inline volume
Oct 13 08:55:34.028: INFO: Driver emptydir on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Oct 13 08:55:34.028: INFO: Deleting pod "pod-subpath-test-inlinevolume-5rmt" in namespace "e2e-provisioning-1713"
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-1713" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:399]: Driver emptydir on volume type InlineVolume doesn't support readOnly source

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:33.272: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:32.951: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSIStorageCapacity_CSIStorageCapacity_used,_have_capacity__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 177.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Inline-volume__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:31.733: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__Projected_configMap_should_be_consumable_from_pods_in_volume_as_non-root_with_defaultMode_and_fsGroup_set__LinuxOnly___NodeFeature_FSGroup___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 31.0s

_sig-api-machinery__API_priority_and_fairness_should_ensure_that_requests_can't_be_drowned_out__fairness___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:187]: skipping test until flakiness is resolved
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-api-machinery] API priority and fairness
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename apf
Oct 13 08:55:29.800: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:55:29.943375  930899 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:55:29.943: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that requests can't be drowned out (fairness) [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:185
[AfterEach] [sig-api-machinery] API priority and fairness
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-apf-6694" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:187]: skipping test until flakiness is resolved

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:29.264: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Volumes_ConfigMap_should_be_mountable__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 30.0s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:23.536: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:55:23.182: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

Stderr
_sig-storage__Projected_downwardAPI_should_provide_container's_cpu_request__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.0s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:57.938: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:57.618: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:57.243: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:56.897: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-apps__DisruptionController_should_observe_that_the_PodDisruptionBudget_status_is_not_updated_for_unmanaged_pods__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 101.0s

_sig-apps__Job_should_remove_pods_when_job_is_deleted__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 86.0s

_sig-storage__Projected_configMap_should_be_consumable_from_pods_in_volume_as_non-root__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 55.4s

_sig-api-machinery__CustomResourcePublishOpenAPI__Privileged_ClusterAdmin__works_for_multiple_CRDs_of_same_group_and_version_but_different_kinds__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 144.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:18.030: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:17.693: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:17.374: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:17.040: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Generic_Ephemeral-volume__default_fs___immediate-binding___ephemeral_should_create_read/write_inline_ephemeral_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 195.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:12.154: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:11.747: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:11.363: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Pre-provisioned_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 94.0s

_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:09.130: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:08.794: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:08.449: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:08.128: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Pre-provisioned_PV__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:54:07.767: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__PersistentVolumes_NFS_with_Single_PV_-_PVC_pairs_create_a_PVC_and_a_pre-bound_PV__test_write_access__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 55.3s

_sig-storage__Projected_configMap_should_be_consumable_in_multiple_volumes_in_the_same_pod__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 31.0s

_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:53:58.797: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-node__InitContainer__NodeConformance__should_not_start_app_containers_and_fail_the_pod_if_init_containers_fail_on_a_RestartNever_pod__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 35.5s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 46.1s

_sig-api-machinery__ResourceQuota_should_verify_ResourceQuota_with_cross_namespace_pod_affinity_scope_using_scope-selectors.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 8.9s

_sig-storage__Storage_Policy_Based_Volume_Provisioning__Feature_vsphere__verify_VSAN_storage_capability_with_invalid_capability_name_objectSpaceReserve_is_not_honored_for_dynamically_provisioned_pvc_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 08:53:34.274: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:53:34.448894  926898 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:53:34.448: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 08:53:34.453: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-1504" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__AdmissionWebhook__Privileged_ClusterAdmin__listing_mutating_webhooks_should_work__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 37.2s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:53:30.180: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.4s

_sig-api-machinery__ResourceQuota__Feature_PodPriority__should_verify_ResourceQuota's_priority_class_scope__quota_set_to_pod_count__1__against_a_pod_with_same_priority_class.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 7.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 46.4s

_sig-storage__PersistentVolumes_NFS_with_multiple_PVs_and_PVCs_all_in_same_ns_should_create_3_PVs_and_3_PVCs__test_write_access__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 138.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Inline-volume__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:53:14.449: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__immediate_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:53:14.061: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__Garbage_collector_should_orphan_pods_created_by_rc_if_deleteOptions.OrphanDependents_is_nil__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 35.8s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:52.711: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__emptydir___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:52.272: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:51.918: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__blockfs___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:51.561: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__CSI_Ephemeral-volume__default_fs___ephemeral_should_create_read/write_inline_ephemeral_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 167.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 35.8s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:37.856: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_mock_volume_CSIStorageCapacity_CSIStorageCapacity_unused__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 174.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_same_fsgroup_skips_ownership_changes_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:37.523: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__Always__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_new_pod_fsgroup_applied_to_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:37.168: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__ConfigMap_should_be_immutable_if_`immutable`_field_is_set__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 0.9s

_sig-storage__In-tree_Volumes__Driver__gcepd___Testpattern__Pre-provisioned_PV__ext3___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:36.803: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 53.7s

_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:33.070: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:32.689: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Inline-volume__ext4___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:32.377: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:32.053: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:52:31.752: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-api-machinery__ResourceQuota_should_create_a_ResourceQuota_and_capture_the_life_of_a_replication_controller.__Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 11.9s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_single_file__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 42.2s

_sig-auth___Feature_NodeAuthorizer__A_node_shouldn't_be_able_to_create_another_node__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

_sig-node__Probing_container_should_be_restarted_with_a_failing_exec_liveness_probe_that_took_longer_than_the_timeout__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 83.0s

_sig-apps__DisruptionController_evictions__no_PDB_=>_should_allow_an_eviction__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 40.9s

_sig-storage__Zone_Support__Feature_vsphere__Verify_PVC_creation_with_incompatible_storage_policy_along_with_compatible_zone_and_datastore_combination_specified_in_storage_class_fails__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 08:51:50.336: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:51:50.454722  922710 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:51:50.454: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 08:51:50.458: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-2392" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_mount_options__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:51:49.751: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:51:49.399: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.9s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 08:51:48.763: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:51:48.903988  922669 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:51:48.904: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 08:51:48.913: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-1636" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:51:48.027: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__ResourceQuota__Feature_PodPriority__should_verify_ResourceQuota's_priority_class_scope__quota_set_to_pod_count__1__against_2_pods_with_different_priority_class.__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 6.9s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Inline-volume__default_fs___subPath_should_support_file_as_subpath__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:51:44.799: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__ext4___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:51:44.353: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__block_volmode___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:51:43.973: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__PersistentVolumes-local___Volume_type__block__Two_pods_mounting_a_local_volume_at_the_same_time_should_be_able_to_write_from_pod1_and_read_from_pod2__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 36.8s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 145.0s

_sig-network__Services_should_allow_pods_to_hairpin_back_to_themselves_through_services__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 175.0s

Failed:
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:1033]: Unexpected error:
    <*errors.errorString | 0xc001972380>: {
        s: "service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol
occurred
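
This failure is a hairpin-traffic check: the hairpin pod connects back to itself through its own Service (hairpin-test in namespace e2e-services-1435, ClusterIP shown in the log below), and every netcat attempt times out until the 2m0s limit is hit. A minimal manual reproduction of the same probe, useful when re-triaging on a live cluster, might look like the following sketch — the namespace, service, and pod names are taken from this log and will differ on a fresh run, since the e2e framework creates and destroys them per test:

  # Confirm the Service exists and resolved an endpoint for the pod.
  kubectl -n e2e-services-1435 get svc hairpin-test
  kubectl -n e2e-services-1435 get endpoints hairpin-test

  # Re-run the probe the test performs: from inside the pod, connect back to
  # its own Service on port 8080. A healthy cluster connects immediately; in
  # the log below every attempt times out with "Operation in progress".
  kubectl -n e2e-services-1435 exec hairpin -- \
    /bin/sh -x -c 'echo hostName | nc -v -t -w 2 hairpin-test 8080'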

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Oct 13 08:51:22.245: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:51:22.435560  921314 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:51:22.435: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:749
[It] should allow pods to hairpin back to themselves through services [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:1007
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace e2e-services-1435
Oct 13 08:51:22.466: INFO: hairpin-test cluster ip: 172.30.138.255
STEP: creating a client/server pod
Oct 13 08:51:22.543: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:24.549: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:26.549: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:28.561: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:30.556: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:32.550: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:34.550: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:36.552: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:38.552: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:40.559: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:42.552: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:44.549: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:46.550: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:48.577: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:50.549: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:52.552: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:54.549: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:56.550: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:51:58.557: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:52:00.565: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:52:02.561: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:52:04.553: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:52:06.551: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:52:08.562: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:52:10.556: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace e2e-services-1435 to expose endpoints map[hairpin:[8080]]
Oct 13 08:52:10.586: INFO: successfully validated that service hairpin-test in namespace e2e-services-1435 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Oct 13 08:52:11.588: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:13.903: INFO: rc: 1
Oct 13 08:52:13.903: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:14.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:17.205: INFO: rc: 1
Oct 13 08:52:17.205: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:17.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:20.225: INFO: rc: 1
Oct 13 08:52:20.225: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:20.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:23.181: INFO: rc: 1
Oct 13 08:52:23.181: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:23.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:26.214: INFO: rc: 1
Oct 13 08:52:26.214: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:26.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:29.190: INFO: rc: 1
Oct 13 08:52:29.190: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:29.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:32.188: INFO: rc: 1
Oct 13 08:52:32.188: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:32.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:35.185: INFO: rc: 1
Oct 13 08:52:35.185: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:35.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:38.232: INFO: rc: 1
Oct 13 08:52:38.232: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:38.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:41.265: INFO: rc: 1
Oct 13 08:52:41.265: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:41.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:44.234: INFO: rc: 1
Oct 13 08:52:44.234: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:44.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:47.205: INFO: rc: 1
Oct 13 08:52:47.205: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:47.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:50.187: INFO: rc: 1
Oct 13 08:52:50.187: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:50.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:53.312: INFO: rc: 1
Oct 13 08:52:53.312: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:53.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:56.248: INFO: rc: 1
Oct 13 08:52:56.248: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:56.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:59.274: INFO: rc: 1
Oct 13 08:52:59.274: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:52:59.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:02.216: INFO: rc: 1
Oct 13 08:53:02.216: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:02.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:05.199: INFO: rc: 1
Oct 13 08:53:05.199: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:05.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:08.205: INFO: rc: 1
Oct 13 08:53:08.205: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:08.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:11.199: INFO: rc: 1
Oct 13 08:53:11.199: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:11.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:14.240: INFO: rc: 1
Oct 13 08:53:14.240: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:14.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:17.288: INFO: rc: 1
Oct 13 08:53:17.288: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:17.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:20.194: INFO: rc: 1
Oct 13 08:53:20.194: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:20.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:23.191: INFO: rc: 1
Oct 13 08:53:23.192: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:23.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:26.226: INFO: rc: 1
Oct 13 08:53:26.226: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:26.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:29.213: INFO: rc: 1
Oct 13 08:53:29.213: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:29.903: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:32.248: INFO: rc: 1
Oct 13 08:53:32.248: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:32.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:35.255: INFO: rc: 1
Oct 13 08:53:35.255: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:35.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:38.262: INFO: rc: 1
Oct 13 08:53:38.262: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:38.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:41.276: INFO: rc: 1
Oct 13 08:53:41.276: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:41.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:44.343: INFO: rc: 1
Oct 13 08:53:44.343: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:44.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:47.223: INFO: rc: 1
Oct 13 08:53:47.223: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:47.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:50.210: INFO: rc: 1
Oct 13 08:53:50.210: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:50.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:53.218: INFO: rc: 1
Oct 13 08:53:53.218: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:53.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:56.190: INFO: rc: 1
Oct 13 08:53:56.190: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:56.908: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:53:59.234: INFO: rc: 1
Oct 13 08:53:59.234: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:53:59.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:54:02.243: INFO: rc: 1
Oct 13 08:54:02.243: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:54:02.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:54:05.216: INFO: rc: 1
Oct 13 08:54:05.216: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:54:05.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:54:08.282: INFO: rc: 1
Oct 13 08:54:08.282: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:54:08.903: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:54:11.185: INFO: rc: 1
Oct 13 08:54:11.185: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:54:11.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:54:14.300: INFO: rc: 1
Oct 13 08:54:14.300: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ nc -v -t -w 2 hairpin-test 8080
+ echo hostName
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 13 08:54:14.300: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:54:16.622: INFO: rc: 1
Oct 13 08:54:16.623: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
[AfterEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "e2e-services-1435".
STEP: Found 5 events.
Oct 13 08:54:16.638: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hairpin: { } Scheduled: Successfully assigned e2e-services-1435/hairpin to ostest-n5rnf-worker-0-94fxs
Oct 13 08:54:16.638: INFO: At 2022-10-13 08:52:07 +0000 UTC - event for hairpin: {multus } AddedInterface: Add eth0 [10.128.174.191/23] from kuryr
Oct 13 08:54:16.638: INFO: At 2022-10-13 08:52:08 +0000 UTC - event for hairpin: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:54:16.638: INFO: At 2022-10-13 08:52:08 +0000 UTC - event for hairpin: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container agnhost-container
Oct 13 08:54:16.638: INFO: At 2022-10-13 08:52:08 +0000 UTC - event for hairpin: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container agnhost-container
Oct 13 08:54:16.642: INFO: POD      NODE                         PHASE    GRACE  CONDITIONS
Oct 13 08:54:16.642: INFO: hairpin  ostest-n5rnf-worker-0-94fxs  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:51:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:52:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:52:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:51:22 +0000 UTC  }]
Oct 13 08:54:16.642: INFO: 
Oct 13 08:54:16.654: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-services-1435" for this suite.
[AfterEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:753
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:1033]: Unexpected error:
    <*errors.errorString | 0xc001972380>: {
        s: "service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol
occurred

Stderr
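
The probe this suite loops on can be re-run by hand with the same command shown in the retry log. A minimal sketch, assuming a namespace, pod, and service equivalent to the ones in this run (e2e-services-1435, hairpin, and hairpin-test are gone once the suite destroys the namespace, so substitute names that still exist):

  # Re-issue the reachability probe from the retry loop above.
  kubectl --namespace=e2e-services-1435 exec hairpin -- \
    /bin/sh -x -c 'echo hostName | nc -v -t -w 2 hairpin-test 8080'

The pod reports Ready in the conditions listed above and its image was already present on the node, so a timeout on every retry across the full 2m0s window points toward hairpin/service traffic handling in the cluster network (Kuryr in this run) rather than the workload itself; that reading is inferred from this log only, not a confirmed root cause.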
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 91.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 32.2s

_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 210.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__filesystem_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:28.617: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-network__DNS_should_resolve_DNS_of_partial_qualified_names_for_the_cluster__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 81.0s

_sig-storage__In-tree_Volumes__Driver__azure-disk___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:26.424: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:26.069: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:25.663: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-node__Security_Context_When_creating_a_pod_with_readOnlyRootFilesystem_should_run_the_container_with_writable_rootfs_when_readOnlyRootFilesystem=false__NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 55.3s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Pre-provisioned_PV__ntfs___Feature_Windows__volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:25.218: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__block_volmode___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:24.974: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Pre-provisioned_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:24.761: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping

Stderr
_sig-node__Security_Context_When_creating_a_pod_with_privileged_should_run_the_container_as_unprivileged_when_false__LinuxOnly___NodeConformance___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 57.4s

_sig-node__Probing_container_should_override_timeoutGracePeriodSeconds_when_LivenessProbe_field_is_set__Feature_ProbeTerminationGracePeriod___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 66.0s

_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__block_volmode___volumeMode_should_not_mount_/_map_unused_volumes_in_a_pod__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:24.311: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir___Testpattern__Dynamic_PV__default_fs__allowExpansion___volume-expand_should_resize_volume_when_PVC_is_edited_while_pod_is_using_it__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:24.128: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:23.965: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link-bindmounted___Testpattern__Dynamic_PV__default_fs___capacity_provides_storage_capacity_information__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:23.759: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__Volume_Disk_Format__Feature_vsphere__verify_disk_format_type_-_eagerzeroedthick_is_honored_for_dynamically_provisioned_pv_using_storageclass__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:71]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-disk-format
Oct 13 08:50:24.335: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:50:24.560161  919226 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:50:24.560: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:70
Oct 13 08:50:24.564: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-disk-format-3988" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:71]: Only supported for providers [vsphere] (not openstack)

Stderr
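
Several entries in this run, including the one above, pair the policy/v1beta1 PodSecurityPolicy deprecation warning with "No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled"; the framework simply lists PSPs while making the test namespace privileged and finds none. A one-line sketch to confirm the same by hand on a pre-1.25 cluster such as this one (the PSP API is removed in 1.25+):

  # Expect an empty result; on 1.25+ clusters the resource type no longer exists.
  kubectl get podsecuritypolicies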
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:23.532: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directory__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:23.120: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__default_fs___volumes_should_allow_exec_of_files_on_the_volume__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:106]: Driver "csi-hostpath" does not support exec - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume
Oct 13 08:50:23.725: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:50:24.180933  919158 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:50:24.181: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:196
Oct 13 08:50:24.188: INFO: Driver "csi-hostpath" does not support exec - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-7948" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:106]: Driver "csi-hostpath" does not support exec - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs__allowExpansion___Feature_Windows__volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:23.083: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSI_Snapshot_Controller_metrics__Feature_VolumeSnapshotDataSource__snapshot_controller_should_emit_dynamic_CreateSnapshot,_CreateSnapshotAndReady,_and_DeleteSnapshot_metrics__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 134.0s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1786]: Snapshot controller metrics not found -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] CSI mock volume
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename csi-mock-volumes
Oct 13 08:50:23.383: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:50:23.631512  919134 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:50:23.631: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] snapshot controller should emit dynamic CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1765
STEP: Building a driver namespace object, basename e2e-csi-mock-volumes-7587
Oct 13 08:50:24.222: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Oct 13 08:50:24.496: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-attacher
Oct 13 08:50:24.534: INFO: creating *v1.ClusterRole: external-attacher-runner-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.534: INFO: Define cluster role external-attacher-runner-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.549: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.569: INFO: creating *v1.Role: e2e-csi-mock-volumes-7587-5329/external-attacher-cfg-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.584: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-attacher-role-cfg
Oct 13 08:50:24.607: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-provisioner
Oct 13 08:50:24.640: INFO: creating *v1.ClusterRole: external-provisioner-runner-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.640: INFO: Define cluster role external-provisioner-runner-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.653: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.665: INFO: creating *v1.Role: e2e-csi-mock-volumes-7587-5329/external-provisioner-cfg-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.691: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-provisioner-role-cfg
Oct 13 08:50:24.706: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-resizer
Oct 13 08:50:24.715: INFO: creating *v1.ClusterRole: external-resizer-runner-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.716: INFO: Define cluster role external-resizer-runner-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.760: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.777: INFO: creating *v1.Role: e2e-csi-mock-volumes-7587-5329/external-resizer-cfg-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.800: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-resizer-role-cfg
Oct 13 08:50:24.830: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-snapshotter
Oct 13 08:50:24.839: INFO: creating *v1.ClusterRole: external-snapshotter-runner-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.839: INFO: Define cluster role external-snapshotter-runner-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.867: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.888: INFO: creating *v1.Role: e2e-csi-mock-volumes-7587-5329/external-snapshotter-leaderelection-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.899: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/external-snapshotter-leaderelection
Oct 13 08:50:24.941: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-mock
Oct 13 08:50:24.955: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:24.994: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:25.007: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:25.022: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:25.058: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:25.079: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-e2e-csi-mock-volumes-7587
Oct 13 08:50:25.096: INFO: creating *v1.StorageClass: csi-mock-sc-e2e-csi-mock-volumes-7587
Oct 13 08:50:25.120: INFO: creating *v1.StatefulSet: e2e-csi-mock-volumes-7587-5329/csi-mockplugin
Oct 13 08:50:25.153: INFO: creating *v1.CSIDriver: csi-mock-e2e-csi-mock-volumes-7587
Oct 13 08:50:25.167: INFO: creating *v1.StatefulSet: e2e-csi-mock-volumes-7587-5329/csi-mockplugin-snapshotter
Oct 13 08:50:25.184: INFO: waiting up to 4m0s for CSIDriver "csi-mock-e2e-csi-mock-volumes-7587"
Oct 13 08:50:25.201: INFO: waiting for CSIDriver csi-mock-e2e-csi-mock-volumes-7587 to register on node ostest-n5rnf-worker-0-j4pkp
W1013 08:51:29.760407  919134 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
W1013 08:51:29.760437  919134 metrics_grabber.go:151] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
Oct 13 08:51:29.760: INFO: Snapshot controller metrics not found -- skipping
STEP: Cleaning up resources
STEP: deleting the test namespace: e2e-csi-mock-volumes-7587
STEP: Waiting for namespaces [e2e-csi-mock-volumes-7587] to vanish
STEP: uninstalling csi mock driver
Oct 13 08:52:01.850: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-attacher
Oct 13 08:52:01.864: INFO: deleting *v1.ClusterRole: external-attacher-runner-e2e-csi-mock-volumes-7587
Oct 13 08:52:01.883: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:01.925: INFO: deleting *v1.Role: e2e-csi-mock-volumes-7587-5329/external-attacher-cfg-e2e-csi-mock-volumes-7587
Oct 13 08:52:01.962: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-attacher-role-cfg
Oct 13 08:52:01.987: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-provisioner
Oct 13 08:52:02.005: INFO: deleting *v1.ClusterRole: external-provisioner-runner-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.039: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.065: INFO: deleting *v1.Role: e2e-csi-mock-volumes-7587-5329/external-provisioner-cfg-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.084: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-provisioner-role-cfg
Oct 13 08:52:02.108: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-resizer
Oct 13 08:52:02.130: INFO: deleting *v1.ClusterRole: external-resizer-runner-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.151: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.180: INFO: deleting *v1.Role: e2e-csi-mock-volumes-7587-5329/external-resizer-cfg-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.199: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-resizer-role-cfg
Oct 13 08:52:02.228: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-snapshotter
Oct 13 08:52:02.260: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.271: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.304: INFO: deleting *v1.Role: e2e-csi-mock-volumes-7587-5329/external-snapshotter-leaderelection-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.347: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/external-snapshotter-leaderelection
Oct 13 08:52:02.365: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-mock
Oct 13 08:52:02.390: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.406: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.421: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.444: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.466: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.497: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.506: INFO: deleting *v1.StorageClass: csi-mock-sc-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.519: INFO: deleting *v1.StatefulSet: e2e-csi-mock-volumes-7587-5329/csi-mockplugin
Oct 13 08:52:02.534: INFO: deleting *v1.CSIDriver: csi-mock-e2e-csi-mock-volumes-7587
Oct 13 08:52:02.560: INFO: deleting *v1.StatefulSet: e2e-csi-mock-volumes-7587-5329/csi-mockplugin-snapshotter
STEP: deleting the driver namespace: e2e-csi-mock-volumes-7587-5329
STEP: Waiting for namespaces [e2e-csi-mock-volumes-7587-5329] to vanish
[AfterEach] [sig-storage] CSI mock volume
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1786]: Snapshot controller metrics not found -- skipping

Stderr
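
The skip above follows from the metrics-grabber warnings earlier in this entry: no pods were found in kube-system and no snapshot-controller pod was located, so the metric assertions never run. A quick way to see where (or whether) the snapshot controller is deployed, sketched here assuming the same kubeconfig and cluster-admin access:

  # Look for the CSI snapshot controller in any namespace.
  kubectl get pods -A | grep -i snapshot

If the controller runs outside kube-system (common on OpenShift, where storage components live in their own namespaces), the upstream grabber still reports it as missing and the test skips rather than fails.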
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:22.716: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-node__Sysctls__LinuxOnly___NodeConformance__should_reject_invalid_sysctls__MinimumKubeletVersion_1.21___Conformance___Suite_openshift/conformance/parallel/minimal___Suite_k8s_
no-testclass
Time Taken: 1.2s

_sig-storage__In-tree_Volumes__Driver__cinder___Testpattern__Dynamic_PV__default_fs___volume-expand_should_not_allow_expansion_of_pvcs_without_AllowVolumeExpansion_property__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:22.588: INFO: Driver "cinder" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___provisioning_should_provision_storage_with_snapshot_data_source__Feature_VolumeSnapshotDataSource___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:22.330: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:22.239: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__windows-gcepd___Testpattern__Dynamic_PV__ntfs___Feature_Windows__provisioning_should_provision_storage_with_pvc_data_source__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:22.090: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-link___Testpattern__Dynamic_PV__ntfs___Feature_Windows__subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:22.065: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__nfs___Testpattern__Dynamic_PV__default_fs___subPath_should_support_existing_directories_when_readOnly_specified_in_the_volumeSource__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 123.0s

_sig-storage__In-tree_Volumes__Driver__hostPath___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_directory_specified_in_the_volumeMount__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.863: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Dynamic_PV__block_volmode__allowExpansion___volume-expand_Verify_if_offline_PVC_expansion_works__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.4s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.766: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-apps__Deployment_should_not_disrupt_a_cloud_load-balancer's_connectivity_during_rollout__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 1.3s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/apps/deployment.go:162]: Only supported for providers [aws azure gce gke] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-apps] Deployment
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename deployment
Oct 13 08:50:22.269: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:50:22.463260  918993 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:50:22.463: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/deployment.go:89
[It] should not disrupt a cloud load-balancer's connectivity during rollout [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/deployment.go:161
Oct 13 08:50:22.476: INFO: Only supported for providers [aws azure gce gke] (not openstack)
[AfterEach] [sig-apps] Deployment
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/deployment.go:83
Oct 13 08:50:22.488: INFO: Log out all the ReplicaSets if there is no deployment created
[AfterEach] [sig-apps] Deployment
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-deployment-2304" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/apps/deployment.go:162]: Only supported for providers [aws azure gce gke] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__dir-bindmounted___Testpattern__Dynamic_PV__default_fs___fsgroupchangepolicy__OnRootMismatch__LinuxOnly_,_pod_created_with_an_initial_fsgroup,_volume_contents_ownership_changed_in_first_pod,_new_pod_with_different_fsgroup_applied_to_the_volume_contents__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.597: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__CSI_Volumes__Driver__csi-hostpath___Testpattern__Dynamic_PV__delayed_binding___topology_should_fail_to_schedule_a_pod_which_has_topologies_that_conflict_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.562: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__aws___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.352: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)

Stderr
_sig-storage__In-tree_Volumes__Driver__vsphere___Testpattern__Dynamic_PV__immediate_binding___topology_should_provision_a_volume_and_schedule_a_pod_with_AllowedTopologies__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.6s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.229: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)

Stderr
_sig-storage__CSI_mock_volume_CSI_attach_test_using_mock_driver_should_not_require_VolumeAttach_for_drivers_without_attachment__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 189.0s

_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.8s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.366: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Dynamic_PV__block_volmode___volumes_should_store_data__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.109: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__block___Testpattern__Inline-volume__default_fs___subPath_should_support_readOnly_file_specified_in_the_volumeMount__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.5s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.092: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping

Stderr
_sig-storage__In-tree_Volumes__Driver__local__LocalVolumeType__tmpfs___Testpattern__Pre-provisioned_PV__default_fs___subPath_should_support_non-existent_path__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 50.5s

_sig-storage__In-tree_Volumes__Driver__hostPathSymlink___Testpattern__Dynamic_PV__default_fs___subPath_should_be_able_to_unmount_after_the_subpath_directory_is_deleted__LinuxOnly___Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 0.7s

Skipped: skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 08:50:21.314: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping

Stderr
_sig-api-machinery__ServerSideApply_should_work_for_CRDs__Suite_openshift/conformance/parallel___Suite_k8s_
no-testclass
Time Taken: 7.7s

_sig-arch__Managed_cluster_should_ensure_control_plane_pods_do_not_run_in_best-effort_QoS__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.7s

Failed:
fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:13:38.852: 
7 pods found in best-effort QoS:
openshift-kuryr/kuryr-cni-2rrvs is running in best-effort QoS
openshift-kuryr/kuryr-cni-cjcgk is running in best-effort QoS
openshift-kuryr/kuryr-cni-crfvc is running in best-effort QoS
openshift-kuryr/kuryr-cni-ndzt5 is running in best-effort QoS
openshift-kuryr/kuryr-cni-t448w is running in best-effort QoS
openshift-kuryr/kuryr-cni-xzbzv is running in best-effort QoS
openshift-kuryr/kuryr-controller-7654df4d98-f2qvz is running in best-effort QoS

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[It] ensure control plane pods do not run in best-effort QoS [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/operators/qos.go:20
Oct 13 10:13:38.852: FAIL: 
7 pods found in best-effort QoS:
openshift-kuryr/kuryr-cni-2rrvs is running in best-effort QoS
openshift-kuryr/kuryr-cni-cjcgk is running in best-effort QoS
openshift-kuryr/kuryr-cni-crfvc is running in best-effort QoS
openshift-kuryr/kuryr-cni-ndzt5 is running in best-effort QoS
openshift-kuryr/kuryr-cni-t448w is running in best-effort QoS
openshift-kuryr/kuryr-cni-xzbzv is running in best-effort QoS
openshift-kuryr/kuryr-controller-7654df4d98-f2qvz is running in best-effort QoS

Full Stack Trace
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0000001a0)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113 +0xba
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0030f4e68)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:64 +0x125
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x7f90603504c8)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/it_node.go:26 +0x7b
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001e73680, 0xc0030f5230, {0x83433a0, 0xc000330940})
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:215 +0x2a9
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001e73680, {0x83433a0, 0xc000330940})
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:138 +0xe7
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001e2ab40, 0xc001e73680)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:200 +0xe5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001e2ab40)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:170 +0x1a5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001e2ab40)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:66 +0xc5
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00031e780, {0x8343660, 0xc001b60e10}, {0x0, 0x7f90385531b8}, {0xc000f9a010, 0x1, 0x1}, {0x843fe58, 0xc000330940}, ...)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/suite/suite.go:62 +0x4b2
github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc000a3fad0, {0xc00064a7f0, 0xb8fc7b0, 0x457d780})
	github.com/openshift/origin/pkg/test/ginkgo/cmd_runtest.go:61 +0x3be
main.newRunTestCommand.func1.1()
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x32
github.com/openshift/origin/test/extended/util.WithCleanup(0xc0019bfc18)
	github.com/openshift/origin/test/extended/util/test.go:168 +0xad
main.newRunTestCommand.func1(0xc001d89680, {0xc00064a7f0, 0x1, 0x1})
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x38a
github.com/spf13/cobra.(*Command).execute(0xc001d89680, {0xc00064a7b0, 0x1, 0x1})
	github.com/spf13/cobra@v1.1.3/command.go:852 +0x60e
github.com/spf13/cobra.(*Command).ExecuteC(0xc001d88c80)
	github.com/spf13/cobra@v1.1.3/command.go:960 +0x3ad
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.1.3/command.go:897
main.main.func1(0xc000b54700)
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:84 +0x8a
main.main()
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:85 +0x3b6
[AfterEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:140
[AfterEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:141
fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:13:38.852: 
7 pods found in best-effort QoS:
openshift-kuryr/kuryr-cni-2rrvs is running in best-effort QoS
openshift-kuryr/kuryr-cni-cjcgk is running in best-effort QoS
openshift-kuryr/kuryr-cni-crfvc is running in best-effort QoS
openshift-kuryr/kuryr-cni-ndzt5 is running in best-effort QoS
openshift-kuryr/kuryr-cni-t448w is running in best-effort QoS
openshift-kuryr/kuryr-cni-xzbzv is running in best-effort QoS
openshift-kuryr/kuryr-controller-7654df4d98-f2qvz is running in best-effort QoS

Stderr
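
Note on the failure above: a pod is classified as BestEffort only when none of its containers declare CPU or memory requests or limits, so the kuryr-cni and kuryr-controller pods listed are running with no resource reservations at all. A minimal sketch of how to confirm and experiment with this by hand (resource values are illustrative, not a recommendation):

  # list every pod currently classified as BestEffort
  kubectl get pods -A -o jsonpath='{range .items[?(@.status.qosClass=="BestEffort")]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'

  # add illustrative requests to the kuryr-cni daemonset so its pods become Burstable
  kubectl -n openshift-kuryr set resources daemonset/kuryr-cni --requests=cpu=10m,memory=64Mi

On a managed cluster the owning operator may revert a manual edit like this, so the durable fix belongs in the component's manifests; the commands are for inspection and experimentation only.
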
_sig-apps__Feature_DeploymentConfig__deploymentconfigs_initially_should_not_deploy_if_pods_never_transition_to_ready__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 100.0s

_sig-cli__oc_debug_ensure_debug_does_not_depend_on_a_container_actually_existing_for_the_selected_resource__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.3s

_sig-cli__oc_explain_networking_types_when_using_openshift-sdn_should_contain_proper_fields_description_for_special_networking_types__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:398]: Not using openshift-sdn
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-cli] oc explain networking types
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-cli] oc explain networking types
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:36:43.646: INFO: configPath is now "/tmp/configfile1982570296"
Oct 13 10:36:43.646: INFO: The user is now "e2e-test-oc-explain-w8cqf-user"
Oct 13 10:36:43.646: INFO: Creating project "e2e-test-oc-explain-w8cqf"
Oct 13 10:36:43.884: INFO: Waiting on permissions in project "e2e-test-oc-explain-w8cqf" ...
Oct 13 10:36:43.893: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:36:44.012: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:36:44.138: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:36:44.254: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:36:44.266: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:36:44.279: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:36:44.895: INFO: Project "e2e-test-oc-explain-w8cqf" has been fully provisioned.
[BeforeEach] when using openshift-sdn
  github.com/openshift/origin/test/extended/networking/util.go:396
Oct 13 10:36:45.049: INFO: Not using openshift-sdn
[AfterEach] [sig-cli] oc explain networking types
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:36:45.075: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-oc-explain-w8cqf-user}, err: <nil>
Oct 13 10:36:45.088: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-oc-explain-w8cqf}, err: <nil>
Oct 13 10:36:45.101: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~n3wLJS5No_EjiC0c_09c7pFgD1xK1_UpvxQm38M8qzs}, err: <nil>
[AfterEach] [sig-cli] oc explain networking types
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-oc-explain-w8cqf" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:398]: Not using openshift-sdn

Stderr
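
This case, like the other openshift-sdn-specific tests in this report, skips itself because the cluster is not running the OpenShiftSDN plugin (the CNI here is Kuryr, as the openshift-kuryr pods elsewhere in the report indicate). One way to confirm the configured network type, assuming oc access to the cluster:

  oc get network.config/cluster -o jsonpath='{.spec.networkType}{"\n"}'
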
_sig-network__Feature_Router__The_HAProxy_router_should_override_the_route_host_with_a_custom_value__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 123.0s

_sig-cli__oc_builds_patch_buildconfig__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.3s

_sig-operator__OLM_should_be_installed_with_catalogsources_at_version_v1alpha1__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.4s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_image.openshift.io/v1,_Resource=imagestreams__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.0s

_sig-imageregistry__Feature_ImageTriggers__Image_change_build_triggers_TestSimpleImageChangeBuildTriggerFromImageStreamTagDocker__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.3s

_sig-builds__Feature_Builds__prune_builds_based_on_settings_in_the_buildconfig__should_prune_completed_builds_based_on_the_successfulBuildsHistoryLimit_setting__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 285.0s

_sig-arch__Managed_cluster_should_only_include_cluster_daemonsets_that_have_maxUnavailable_update_of_10_or_33_percent__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.4s

Failed:
fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:35:48.384: Daemonsets found that do not meet platform requirements for update strategy:
  expected daemonset openshift-kuryr/kuryr-cni to have maxUnavailable 10% or 33% (see comment) instead of 1

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-arch] Managed cluster
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[It] should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/operators/daemon_set.go:41
Oct 13 10:35:48.384: INFO: Daemonset configuration in payload:
daemonset openshift-cluster-csi-drivers/openstack-cinder-csi-driver-node has 10%
daemonset openshift-cluster-node-tuning-operator/tuned has 10%
daemonset openshift-dns/dns-default has 10%
daemonset openshift-dns/node-resolver has 33%
daemonset openshift-image-registry/node-ca has 10%
daemonset openshift-ingress-canary/ingress-canary has 10%
daemonset openshift-machine-config-operator/machine-config-daemon has 10%
daemonset openshift-manila-csi-driver/csi-nodeplugin-nfsplugin has 10%
daemonset openshift-manila-csi-driver/openstack-manila-csi-nodeplugin has 10%
daemonset openshift-monitoring/node-exporter has 10%
daemonset openshift-multus/multus has 10%
daemonset openshift-multus/multus-additional-cni-plugins has 10%
daemonset openshift-multus/network-metrics-daemon has 33%
daemonset openshift-network-diagnostics/network-check-target has 10%
expected daemonset openshift-kuryr/kuryr-cni to have maxUnavailable 10% or 33% (see comment) instead of 1
Oct 13 10:35:48.384: FAIL: Daemonsets found that do not meet platform requirements for update strategy:
  expected daemonset openshift-kuryr/kuryr-cni to have maxUnavailable 10% or 33% (see comment) instead of 1

Full Stack Trace
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0000001a0)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113 +0xba
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc002b34e68)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:64 +0x125
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x7f54a917bfff)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/it_node.go:26 +0x7b
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001f56a50, 0xc002b35230, {0x83433a0, 0xc00038a900})
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:215 +0x2a9
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001f56a50, {0x83433a0, 0xc00038a900})
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:138 +0xe7
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0017ecc80, 0xc001f56a50)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:200 +0xe5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0017ecc80)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:170 +0x1a5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0017ecc80)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:66 +0xc5
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000376780, {0x8343660, 0xc000deb270}, {0x0, 0x0}, {0xc000c6e360, 0x1, 0x1}, {0x843fe58, 0xc00038a900}, ...)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/suite/suite.go:62 +0x4b2
github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc00169c360, {0xc000dfb9b0, 0xb8fc7b0, 0x457d780})
	github.com/openshift/origin/pkg/test/ginkgo/cmd_runtest.go:61 +0x3be
main.newRunTestCommand.func1.1()
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x32
github.com/openshift/origin/test/extended/util.WithCleanup(0xc001ebfc18)
	github.com/openshift/origin/test/extended/util/test.go:168 +0xad
main.newRunTestCommand.func1(0xc001df1b80, {0xc000dfb9b0, 0x1, 0x1})
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x38a
github.com/spf13/cobra.(*Command).execute(0xc001df1b80, {0xc000dfb980, 0x1, 0x1})
	github.com/spf13/cobra@v1.1.3/command.go:852 +0x60e
github.com/spf13/cobra.(*Command).ExecuteC(0xc001df1180)
	github.com/spf13/cobra@v1.1.3/command.go:960 +0x3ad
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.1.3/command.go:897
main.main.func1(0xc00196f200)
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:84 +0x8a
main.main()
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:85 +0x3b6
[AfterEach] [sig-arch] Managed cluster
  github.com/openshift/origin/test/extended/util/client.go:140
[AfterEach] [sig-arch] Managed cluster
  github.com/openshift/origin/test/extended/util/client.go:141
fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:35:48.384: Daemonsets found that do not meet platform requirements for update strategy:
  expected daemonset openshift-kuryr/kuryr-cni to have maxUnavailable 10% or 33% (see comment) instead of 1

Stderr
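
The failure above concerns the rolling-update strategy of the kuryr-cni daemonset: the conformance check expects cluster daemonsets to declare maxUnavailable as 10% or 33%, while kuryr-cni uses the absolute value 1. A sketch of how to inspect and, illustratively, patch the field by hand (on a managed cluster the owning operator may revert such a change):

  # show the current maxUnavailable setting
  kubectl -n openshift-kuryr get daemonset kuryr-cni -o jsonpath='{.spec.updateStrategy.rollingUpdate.maxUnavailable}{"\n"}'

  # switch it to a percentage (illustrative; the durable fix belongs in the operator's manifests)
  kubectl -n openshift-kuryr patch daemonset kuryr-cni --type merge -p '{"spec":{"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"}}}}'
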
_sig-instrumentation__Prometheus_when_installed_on_the_cluster_should_have_non-Pod_host_cAdvisor_metrics__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 200.0s

Failed:
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:468]: Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "promQL query returned unexpected results:\ncontainer_cpu_usage_seconds_total{id!~\"/kubepods.slice/.*\"} >= 1\n[]",
        },
    ]
    promQL query returned unexpected results:
    container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1
    []
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:250
[It] should have non-Pod host cAdvisor metrics [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:457
Oct 13 10:35:48.785: INFO: Creating namespace "e2e-test-prometheus-jskqg"
Oct 13 10:35:49.101: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:35:49.226: INFO: Creating new exec pod
STEP: perform prometheus metric query container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1
Oct 13 10:38:13.413: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-jskqg exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1"'
Oct 13 10:38:13.754: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1'\n"
Oct 13 10:38:13.754: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1
Oct 13 10:38:23.757: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-jskqg exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1"'
Oct 13 10:38:24.137: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1'\n"
Oct 13 10:38:24.137: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1
Oct 13 10:38:34.143: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-jskqg exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1"'
Oct 13 10:38:34.507: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1'\n"
Oct 13 10:38:34.507: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1
Oct 13 10:38:44.509: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-jskqg exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1"'
Oct 13 10:38:44.853: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1'\n"
Oct 13 10:38:44.853: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1
Oct 13 10:38:54.858: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-jskqg exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1"'
Oct 13 10:38:55.391: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1'\n"
Oct 13 10:38:55.392: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-prometheus-jskqg".
STEP: Found 5 events.
Oct 13 10:39:05.622: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-jskqg/execpod to ostest-n5rnf-worker-0-j4pkp
Oct 13 10:39:05.622: INFO: At 2022-10-13 10:38:11 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.222.102/23] from kuryr
Oct 13 10:39:05.622: INFO: At 2022-10-13 10:38:11 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine
Oct 13 10:39:05.622: INFO: At 2022-10-13 10:38:11 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 10:39:05.622: INFO: At 2022-10-13 10:38:11 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 10:39:05.633: INFO: POD      NODE                         PHASE    GRACE  CONDITIONS
Oct 13 10:39:05.633: INFO: execpod  ostest-n5rnf-worker-0-j4pkp  Running  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:35:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:38:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:38:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:35:49 +0000 UTC  }]
Oct 13 10:39:05.633: INFO: 
Oct 13 10:39:05.653: INFO: skipping dumping cluster info - cluster too large
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-prometheus-jskqg" for this suite.
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:468]: Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "promQL query returned unexpected results:\ncontainer_cpu_usage_seconds_total{id!~\"/kubepods.slice/.*\"} >= 1\n[]",
        },
    ]
    promQL query returned unexpected results:
    container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1
    []
occurred

Stderr
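
The failure above means the query for cAdvisor series outside the /kubepods.slice cgroup returned an empty vector on every retry, i.e. Prometheus is not exposing host-level (non-Pod) container metrics on this cluster. A hedged sketch of re-running the same instant query by hand; the thanos-querier service name only resolves in-cluster, and the token must belong to an account allowed to query the monitoring stack:

  TOKEN="$(oc whoami -t)"   # token of the currently logged-in user
  curl -sk -H "Authorization: Bearer ${TOKEN}" \
    --data-urlencode 'query=container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1' \
    'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query'

A healthy cluster returns at least one series for this query; the empty result list ("[]") in the error text is exactly what failed the assertion.
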
_sig-cli__oc_adm_build-chain__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 5.0s

_sig-auth__Feature_OAuthServer___Token_Expiration__Using_a_OAuth_client_with_a_non-default_token_max_age_to_generate_tokens_that_expire_shortly_works_as_expected_when_using_a_token_authorization_flow__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 156.0s

_sig-builds__Feature_Builds__volumes__build_volumes__should_mount_given_secrets_and_configmaps_into_the_build_pod_for_source_strategy_builds__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 216.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_oauth.openshift.io/v1,_Resource=oauthclients__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

_sig-cli__oc_debug_does_not_require_a_real_resource_on_the_server__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.1s

_sig-network__multicast_when_using_one_of_the_OpenshiftSDN_modes_'redhat/openshift-ovs-multitenant,_redhat/openshift-ovs-networkpolicy'_should_allow_multicast_traffic_in_namespaces_where_it_is_enabled__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.8s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:384]: Not using one of the specified OpenshiftSDN modes
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:35:40.357: INFO: configPath is now "/tmp/configfile436598500"
Oct 13 10:35:40.357: INFO: The user is now "e2e-test-multicast-ntz2z-user"
Oct 13 10:35:40.357: INFO: Creating project "e2e-test-multicast-ntz2z"
Oct 13 10:35:40.770: INFO: Waiting on permissions in project "e2e-test-multicast-ntz2z" ...
Oct 13 10:35:40.781: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:35:40.916: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:35:41.057: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:35:41.169: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:35:41.183: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:35:41.260: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:35:41.921: INFO: Project "e2e-test-multicast-ntz2z" has been fully provisioned.
[BeforeEach] when using one of the OpenshiftSDN modes 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
  github.com/openshift/origin/test/extended/networking/util.go:375
Oct 13 10:35:42.342: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used
Oct 13 10:35:42.342: INFO: Not using one of the specified OpenshiftSDN modes
[AfterEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:35:42.397: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-multicast-ntz2z-user}, err: <nil>
Oct 13 10:35:42.453: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-multicast-ntz2z}, err: <nil>
Oct 13 10:35:42.492: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~DAnp4EtEmByqJLDhdDKQObzkv1UNUw-4pIaFTmE5_eI}, err: <nil>
[AfterEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-multicast-ntz2z" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:384]: Not using one of the specified OpenshiftSDN modes

Stderr
_sig-auth__Feature_RoleBindingRestrictions__RoleBindingRestrictions_should_be_functional__Create_a_RBAC_rolebinding_when_subject_is_not_already_bound_and_is_not_permitted_by_any_RBR_should_fail__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.6s

_sig-arch__ClusterOperators_should_define_at_least_one_namespace_in_their_lists_of_related_objects__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.4s

_sig-auth__Feature_OAuthServer__OAuth_server_should_use_http1.1_only_to_prevent_http2_connection_reuse__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.4s

_sig-imageregistry__Feature_Image__oc_tag_should_preserve_image_reference_for_external_images__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.4s

_sig-devex__check_registry.redhat.io_is_available_and_samples_operator_can_import_sample_imagestreams_run_sample_related_validations__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 14.9s

_sig-auth__Feature_ProjectAPI___TestProjectWatchWithSelectionPredicate_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 11.7s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_keep_the_deployer_pod_invariant_valid_should_deal_with_cancellation_of_running_deployment__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 149.0s

_sig-network-edge__Feature_Idling__Idling_with_a_single_service_and_ReplicationController_should_idle_the_service_and_ReplicationController_properly__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 106.0s

_sig-imageregistry__Feature_ImageLookup__Image_policy_should_perform_lookup_when_the_Deployment_gets_the_resolve-names_annotation_later__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 7.3s

_sig-auth__Feature_OpenShiftAuthorization__scopes_TestScopedImpersonation_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.8s

_sig-builds__Feature_Builds__prune_builds_based_on_settings_in_the_buildconfig__buildconfigs_should_have_a_default_history_limit_set_when_created_via_the_group_api__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 15.4s

_sig-auth__Feature_OAuthServer___Headers__expected_headers_returned_from_the_login_URL_for_the_allow_all_IDP__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 44.8s

_sig-auth__Feature_OpenShiftAuthorization__authorization__TestAuthorizationSubjectAccessReview_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 17.2s

_sig-devex__Feature_Templates__template-api_TestTemplateTransformationFromConfig__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.4s

_Conformance__sig-api-machinery__Feature_APIServer__local_kubeconfig__lb-ext.kubeconfig__should_be_present_on_all_masters_and_work__Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 8.8s

_sig-auth__Feature_OAuthServer__well-known_endpoint_should_be_reachable__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-auth__Feature_OAuthServer___Token_Expiration__Using_a_OAuth_client_with_a_non-default_token_max_age_to_generate_tokens_that_do_not_expire_works_as_expected_when_using_a_token_authorization_flow__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 55.9s

_sig-network__network_isolation_when_using_OpenshiftSDN_in_a_mode_that_isolates_namespaces_by_default_should_allow_communication_from_non-default_to_default_namespace_on_the_same_node__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.4s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:34:37.471: INFO: configPath is now "/tmp/configfile3214545341"
Oct 13 10:34:37.471: INFO: The user is now "e2e-test-ns-global-jv2w9-user"
Oct 13 10:34:37.471: INFO: Creating project "e2e-test-ns-global-jv2w9"
Oct 13 10:34:37.726: INFO: Waiting on permissions in project "e2e-test-ns-global-jv2w9" ...
Oct 13 10:34:37.739: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:34:37.867: INFO: Waiting for service account "default" secrets () to include dockercfg/token ...
Oct 13 10:34:37.947: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:34:38.069: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:34:38.180: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:34:38.191: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:34:38.208: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:34:38.841: INFO: Project "e2e-test-ns-global-jv2w9" has been fully provisioned.
[BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  github.com/openshift/origin/test/extended/networking/util.go:350
Oct 13 10:34:39.160: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used
Oct 13 10:34:39.160: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:34:39.181: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-ns-global-jv2w9-user}, err: <nil>
Oct 13 10:34:39.204: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-ns-global-jv2w9}, err: <nil>
Oct 13 10:34:39.229: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~N69OC9mjdQku8C-lluKw1YrhLt77fuH74XMh9zgdtb0}, err: <nil>
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-ns-global-jv2w9" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.

Stderr
_sig-apps__Feature_DeploymentConfig__deploymentconfigs_ignores_deployer_and_lets_the_config_with_a_NewReplicationControllerCreated_reason_should_let_the_deployment_config_with_a_NewReplicationControllerCreated_reason__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.4s

_sig-network__Feature_Router__The_HAProxy_router_should_expose_prometheus_metrics_for_a_route__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 92.0s

_sig-api-machinery__Feature_APIServer__authenticated_browser_should_get_a_200_from_/__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.0s

_sig-auth__Feature_OAuthServer___Headers__expected_headers_returned_from_the_login_URL_for_the_bootstrap_IDP__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 44.8s

_sig-auth__Feature_RoleBindingRestrictions__RoleBindingRestrictions_should_be_functional__Create_a_rolebinding_when_subject_is_permitted_by_RBR_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.1s

_sig-operator__an_end_user_can_use_OLM_Report_Upgradeable_in_OLM_ClusterOperators_status__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-network__Feature_Router__The_HAProxy_router_should_respond_with_503_to_unrecognized_hosts__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 53.3s

_sig-cli__oc_builds_get_buildconfig__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.7s

_sig-network__Feature_Router__The_HAProxy_router_should_serve_a_route_that_points_to_two_services_and_respect_weights__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 143.0s

_sig-cli__oc_--request-timeout_works_as_expected__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 5.1s

_sig-builds__Feature_Builds__prune_builds_based_on_settings_in_the_buildconfig__should_prune_canceled_builds_based_on_the_failedBuildsHistoryLimit_setting__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 23.1s

_sig-builds__Feature_Builds__prune_builds_based_on_settings_in_the_buildconfig__should_prune_failed_builds_based_on_the_failedBuildsHistoryLimit_setting__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 100.0s

_sig-network__multicast_when_using_one_of_the_OpenshiftSDN_modes_'redhat/openshift-ovs-subnet'_should_block_multicast_traffic__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.4s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:384]: Not using one of the specified OpenshiftSDN modes
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:33:48.947: INFO: configPath is now "/tmp/configfile3701117800"
Oct 13 10:33:48.947: INFO: The user is now "e2e-test-multicast-hsldw-user"
Oct 13 10:33:48.947: INFO: Creating project "e2e-test-multicast-hsldw"
Oct 13 10:33:49.207: INFO: Waiting on permissions in project "e2e-test-multicast-hsldw" ...
Oct 13 10:33:49.225: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:33:49.338: INFO: Waiting for service account "default" secrets (default-dockercfg-z7rtd,default-dockercfg-z7rtd) to include dockercfg/token ...
Oct 13 10:33:49.431: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:33:49.548: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:33:49.658: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:33:49.670: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:33:49.682: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:33:50.222: INFO: Project "e2e-test-multicast-hsldw" has been fully provisioned.
[BeforeEach] when using one of the OpenshiftSDN modes 'redhat/openshift-ovs-subnet'
  github.com/openshift/origin/test/extended/networking/util.go:375
Oct 13 10:33:50.511: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used
Oct 13 10:33:50.511: INFO: Not using one of the specified OpenshiftSDN modes
[AfterEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:33:50.597: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-multicast-hsldw-user}, err: <nil>
Oct 13 10:33:50.636: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-multicast-hsldw}, err: <nil>
Oct 13 10:33:50.687: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~bt-vq4sHjzx-mxI1hH8wwDVcNmWDv-wEZ8l6p8AdMb4}, err: <nil>
[AfterEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-multicast-hsldw" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:384]: Not using one of the specified OpenshiftSDN modes

Stderr
_sig-apps__Feature_DeploymentConfig__deploymentconfigs_keep_the_deployer_pod_invariant_valid_should_deal_with_cancellation_after_deployer_pod_succeeded__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 70.0s

_sig-auth__Feature_OAuthServer___Headers__expected_headers_returned_from_the_logout_URL__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 64.0s

_sig-storage__Managed_cluster_should_have_no_crashlooping_recycler_pods_over_four_minutes__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 242.0s

_sig-apps__Feature_OpenShiftControllerManager__TestTriggers_configChange__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.7s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_with_failing_hook_should_get_all_logs_from_retried_hooks__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 94.0s

_sig-auth__Feature_SecurityContextConstraints___TestPodUpdateSCCEnforcement__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.5s

_sig-builds__Feature_Builds__pullsecret__docker_build_using_a_pull_secret__Building_from_a_template_should_create_a_docker_build_that_pulls_using_a_secret_run_it__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 120.0s

_sig-builds__Feature_Builds__s2i_build_with_a_root_user_image_should_create_a_root_build_and_pass_with_a_privileged_SCC__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 71.0s

_sig-auth__Feature_ProjectAPI___TestInvalidRoleRefs_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 11.1s

_sig-builds__Feature_Builds__valueFrom__process_valueFrom_in_build_strategy_environment_variables__should_fail_resolving_unresolvable_valueFrom_in_sti_build_environment_variable_references__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 21.6s

_sig-builds__Feature_Builds__valueFrom__process_valueFrom_in_build_strategy_environment_variables__should_successfully_resolve_valueFrom_in_docker_build_environment_variables__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 87.0s

_sig-cli__oc_adm_role-reapers__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 8.2s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs__should_adhere_to_Three_Laws_of_Controllers__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 120.0s

_sig-network__network_isolation_when_using_OpenshiftSDN_in_a_mode_that_isolates_namespaces_by_default_should_allow_communication_from_default_to_non-default_namespace_on_a_different_node__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.0s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:32:18.686: INFO: configPath is now "/tmp/configfile3983327930"
Oct 13 10:32:18.686: INFO: The user is now "e2e-test-ns-global-d5xlg-user"
Oct 13 10:32:18.686: INFO: Creating project "e2e-test-ns-global-d5xlg"
Oct 13 10:32:18.918: INFO: Waiting on permissions in project "e2e-test-ns-global-d5xlg" ...
Oct 13 10:32:18.928: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:32:19.048: INFO: Waiting for service account "default" to be available: serviceaccounts "default" not found (will retry) ...
Oct 13 10:32:19.140: INFO: Waiting for service account "default" secrets () to include dockercfg/token ...
Oct 13 10:32:19.260: INFO: Waiting for service account "default" secrets (default-token-dlkvz) to include dockercfg/token ...
Oct 13 10:32:19.382: INFO: Waiting for service account "default" secrets (default-token-dlkvz) to include dockercfg/token ...
Oct 13 10:32:19.467: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:32:19.580: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:32:19.698: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:32:19.712: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:32:19.745: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:32:20.440: INFO: Project "e2e-test-ns-global-d5xlg" has been fully provisioned.
[BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  github.com/openshift/origin/test/extended/networking/util.go:350
Oct 13 10:32:20.913: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used
Oct 13 10:32:20.913: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:32:20.945: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-ns-global-d5xlg-user}, err: <nil>
Oct 13 10:32:20.971: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-ns-global-d5xlg}, err: <nil>
Oct 13 10:32:20.992: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~CqcLbMW5wzS3clKDWTSUEQFc2ycy1HUnZXgjgli3C2k}, err: <nil>
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-ns-global-d5xlg" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.

Stderr
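
Note: the skip above depends on which network plugin the cluster runs. A hedged way to confirm it (assuming the oc CLI and sufficient RBAC) is to read the cluster network config; a networkType other than OpenShiftSDN, e.g. OVNKubernetes, explains why the OpenshiftSDN-specific isolation test skips:

  oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'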
_sig-auth__Feature_OpenShiftAuthorization__authorization__TestAuthorizationSubjectAccessReviewAPIGroup_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 5.5s

_sig-network__Feature_Network_Policy_Audit_logging__when_using_openshift_ovn-kubernetes_should_ensure_acl_logs_are_created_and_correct__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.1s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:412]: Not using openshift-sdn
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network][Feature:Network Policy Audit logging]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network][Feature:Network Policy Audit logging]
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:32:15.535: INFO: configPath is now "/tmp/configfile2551294290"
Oct 13 10:32:15.535: INFO: The user is now "e2e-test-acl-logging-fh7fx-user"
Oct 13 10:32:15.535: INFO: Creating project "e2e-test-acl-logging-fh7fx"
Oct 13 10:32:15.806: INFO: Waiting on permissions in project "e2e-test-acl-logging-fh7fx" ...
Oct 13 10:32:15.819: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:32:15.932: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:32:16.040: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:32:16.149: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:32:16.162: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:32:16.174: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:32:16.783: INFO: Project "e2e-test-acl-logging-fh7fx" has been fully provisioned.
[BeforeEach] when using openshift ovn-kubernetes
  github.com/openshift/origin/test/extended/networking/util.go:410
Oct 13 10:32:16.931: INFO: Not using openshift-sdn
[AfterEach] [sig-network][Feature:Network Policy Audit logging]
  github.com/openshift/origin/test/extended/networking/acl_audit_log.go:32
[AfterEach] [sig-network][Feature:Network Policy Audit logging]
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:32:16.953: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-acl-logging-fh7fx-user}, err: <nil>
Oct 13 10:32:16.987: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-acl-logging-fh7fx}, err: <nil>
Oct 13 10:32:17.009: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~COJG-WmJ9YYsASzt3fQ9kjLZAY0JstrJV7PSlzxELVc}, err: <nil>
[AfterEach] [sig-network][Feature:Network Policy Audit logging]
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-acl-logging-fh7fx" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:412]: Not using openshift-sdn

Stderr
_sig-builds__Feature_Builds__imagechangetriggers__imagechangetriggers_should_trigger_builds_of_all_types__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.7s

_sig-arch__Managed_cluster_should_should_expose_cluster_services_outside_the_cluster__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.0s

Skipped: skip [github.com/openshift/origin/test/extended/operators/routable.go:41]: default router is not exposed by a load balancer service
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:32:11.017: INFO: configPath is now "/tmp/configfile4118421892"
Oct 13 10:32:11.017: INFO: The user is now "e2e-test-operators-routable-kbc74-user"
Oct 13 10:32:11.017: INFO: Creating project "e2e-test-operators-routable-kbc74"
Oct 13 10:32:11.985: INFO: Waiting on permissions in project "e2e-test-operators-routable-kbc74" ...
Oct 13 10:32:11.998: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:32:12.124: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:32:12.274: INFO: Waiting for service account "deployer" secrets (deployer-dockercfg-ld9w8,deployer-dockercfg-ld9w8) to include dockercfg/token ...
Oct 13 10:32:12.338: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:32:12.446: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:32:12.468: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:32:12.546: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:32:13.232: INFO: Project "e2e-test-operators-routable-kbc74" has been fully provisioned.
[BeforeEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/operators/routable.go:34
[AfterEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:32:13.324: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-operators-routable-kbc74-user}, err: <nil>
Oct 13 10:32:13.347: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-operators-routable-kbc74}, err: <nil>
Oct 13 10:32:13.367: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~U07id3P-s7MxFuISAXEzWg1zBQ0PLfIjILHuf88FNQM}, err: <nil>
[AfterEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-operators-routable-kbc74" for this suite.
skip [github.com/openshift/origin/test/extended/operators/routable.go:41]: default router is not exposed by a load balancer service

Stderr
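
Note: the skip above is reported when the default ingress router is not fronted by a LoadBalancer Service, which is common on platforms without a cloud load balancer. A hedged check, assuming the oc CLI and the standard router-default Service name in openshift-ingress:

  oc -n openshift-ingress get svc router-default -o jsonpath='{.spec.type}{"\n"}'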
_sig-builds__Feature_Builds__valueFrom__process_valueFrom_in_build_strategy_environment_variables__should_fail_resolving_unresolvable_valueFrom_in_docker_build_environment_variable_references__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 21.4s

_sig-network__Feature_Router__The_HAProxy_router_should_run_even_if_it_has_no_access_to_update_status__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 56.0s

_sig-cli__oc_rsh_specific_flags_should_work_well_when_access_to_a_remote_shell__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 62.0s

_sig-auth__Feature_OpenShiftAuthorization__RBAC_proxy_for_openshift_authz__RunLegacyEndpointConfirmNoEscalation_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 19.9s

_sig-network__Feature_Router__The_HAProxy_router_should_enable_openshift-monitoring_to_pull_metrics__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 34.2s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_keep_the_deployer_pod_invariant_valid_should_deal_with_config_change_in_case_the_deployment_is_still_running__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 64.0s

_sig-network__Internal_connectivity_for_TCP_and_UDP_on_ports_9000-9999_is_allowed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 40.8s

_sig-builds__Feature_Builds__timing__capture_build_stages_and_durations__should_record_build_stages_and_durations_for_docker__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 68.0s

_sig-cli__oc_adm_must-gather_runs_successfully__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 530.0s

_sig-operator__OLM_should_be_installed_with_packagemanifests_at_version_v1__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

_sig-operator__OLM_should_be_installed_with_clusterserviceversions_at_version_v1alpha1__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.8s

_sig-auth__Feature_OpenShiftAuthorization__scopes_TestScopeEscalations_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.6s

_sig-builds__Feature_Builds__webhook__TestWebhookGitHubPing__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 7.0s

_sig-cli__oc_adm_must-gather_runs_successfully_with_options__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 64.0s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_viewing_rollout_history_should_print_the_rollout_history__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 113.0s

_sig-builds__Feature_Builds__timing__capture_build_stages_and_durations__should_record_build_stages_and_durations_for_s2i__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 102.0s

_sig-builds__Feature_Builds__webhook__TestWebhook__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.8s

_sig-builds__Feature_Builds__build_with_empty_source__started_build_should_build_even_with_an_empty_source_in_build_config__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 99.0s

_sig-apps__Feature_OpenShiftControllerManager__TestTriggers_imageChange__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-auth__Feature_UserAPI__groups_should_work__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.4s

_sig-auth__Feature_OpenShiftAuthorization__The_default_cluster_RBAC_policy_should_have_correct_RBAC_rules__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-cli__oc_explain_should_contain_proper_fields_description_for_special_types__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 16.9s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_with_enhanced_status_should_include_various_info_in_status__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 98.0s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_with_env_in_params_referencing_the_configmap_should_expand_the_config_map_key_to_a_value__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 107.0s

_sig-cli__oc_observe_works_as_expected__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 11.1s

_sig-devex__Feature_Templates__templateservicebroker_bind_test__should_pass_bind_tests__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.8s

Skipped: skip [github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:45]: The template service broker is not installed: services "apiserver" not found
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-devex][Feature:Templates] templateservicebroker bind test
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-devex][Feature:Templates] templateservicebroker bind test
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:29:25.828: INFO: configPath is now "/tmp/configfile659538773"
Oct 13 10:29:25.828: INFO: The user is now "e2e-test-templates-pv8wh-user"
Oct 13 10:29:25.828: INFO: Creating project "e2e-test-templates-pv8wh"
Oct 13 10:29:25.969: INFO: Waiting on permissions in project "e2e-test-templates-pv8wh" ...
Oct 13 10:29:25.977: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:29:26.096: INFO: Waiting for service account "default" secrets (default-dockercfg-8zc9g,default-dockercfg-8zc9g) to include dockercfg/token ...
Oct 13 10:29:26.184: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:29:26.306: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:29:26.429: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:29:26.452: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:29:26.461: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:29:27.030: INFO: Project "e2e-test-templates-pv8wh" has been fully provisioned.
[BeforeEach] [sig-devex][Feature:Templates] templateservicebroker bind test
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] 
  github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:41
Oct 13 10:29:27.043: INFO: The template service broker is not installed: services "apiserver" not found
[AfterEach] 
  github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:92
[AfterEach] [sig-devex][Feature:Templates] templateservicebroker bind test
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:29:27.061: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-templates-pv8wh-user}, err: <nil>
Oct 13 10:29:27.075: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-templates-pv8wh}, err: <nil>
Oct 13 10:29:27.100: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~LJHb-wRH9vbVDXKmwL_QfXDFvr49Fs0yVVzQqHD4QPY}, err: <nil>
[AfterEach] [sig-devex][Feature:Templates] templateservicebroker bind test
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-templates-pv8wh" for this suite.
skip [github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:45]: The template service broker is not installed: services "apiserver" not found

Stderr
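
Note: the template service broker is an optional component; the skip above means its "apiserver" Service was not found. A hedged check, assuming it would be installed in its usual namespace:

  oc -n openshift-template-service-broker get svc apiserver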
_sig-apps__Feature_DeploymentConfig__deploymentconfigs_with_test_deployments_should_run_a_deployment_to_completion_and_then_scale_to_zero__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 297.0s

_sig-devex__Feature_Templates__templateinstance_creation_with_invalid_object_reports_error__should_report_a_failure_on_creation__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.1s

_sig-devex__Feature_Templates__templateinstance_security_tests__should_pass_security_tests__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 9.7s

_sig-devex__Feature_Templates__templateinstance_object_kinds_test_should_create_and_delete_objects_from_varying_API_groups__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.2s

_sig-operator__OLM_should_have_imagePullPolicy_IfNotPresent_on_thier_deployments__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.0s

_sig-auth__Feature_RoleBindingRestrictions__RoleBindingRestrictions_should_be_functional__Rolebinding_restrictions_tests_single_project_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.6s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_template.openshift.io/v1,_Resource=templateinstances__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.9s

_sig-imageregistry__Feature_ImageTriggers__Image_change_build_triggers_TestSimpleImageChangeBuildTriggerFromImageStreamTagSTIWithConfigChange__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.7s

_sig-network__Feature_Router__The_HAProxy_router_should_override_the_route_host_for_overridden_domains_with_a_custom_value__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 83.0s

_sig-installer__Feature_baremetal__Baremetal_platform_should_have_a_metal3_deployment__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.1s

Skipped: skip [github.com/openshift/origin/test/extended/baremetal/hosts.go:29]: No baremetal platform detected
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-installer][Feature:baremetal] Baremetal platform should
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-installer][Feature:baremetal] Baremetal platform should
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:28:32.841: INFO: configPath is now "/tmp/configfile3661483143"
Oct 13 10:28:32.841: INFO: The user is now "e2e-test-baremetal-j8qzb-user"
Oct 13 10:28:32.841: INFO: Creating project "e2e-test-baremetal-j8qzb"
Oct 13 10:28:33.115: INFO: Waiting on permissions in project "e2e-test-baremetal-j8qzb" ...
Oct 13 10:28:33.123: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:28:33.233: INFO: Waiting for service account "default" secrets (default-token-82rj9) to include dockercfg/token ...
Oct 13 10:28:33.341: INFO: Waiting for service account "default" secrets (default-token-82rj9) to include dockercfg/token ...
Oct 13 10:28:33.433: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:28:33.540: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:28:33.651: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:28:33.670: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:28:33.687: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:28:34.223: INFO: Project "e2e-test-baremetal-j8qzb" has been fully provisioned.
[It] have a metal3 deployment [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/baremetal/hosts.go:66
STEP: checking platform type
Oct 13 10:28:34.236: INFO: No baremetal platform detected
[AfterEach] [sig-installer][Feature:baremetal] Baremetal platform should
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:28:34.275: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-baremetal-j8qzb-user}, err: <nil>
Oct 13 10:28:34.322: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-baremetal-j8qzb}, err: <nil>
Oct 13 10:28:34.339: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~zTJbZV24Emz5eddzpB-651pg0xODXGk1w60L1jJnH2g}, err: <nil>
[AfterEach] [sig-installer][Feature:baremetal] Baremetal platform should
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-baremetal-j8qzb" for this suite.
skip [github.com/openshift/origin/test/extended/baremetal/hosts.go:29]: No baremetal platform detected

Stderr
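
Note: the skip above is expected on any cluster that is not on the baremetal platform. A hedged check of the platform the cluster reports, assuming the oc CLI:

  oc get infrastructure cluster -o jsonpath='{.status.platformStatus.type}{"\n"}'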
_sig-auth__Feature_OAuthServer___Headers__expected_headers_returned_from_the_login_URL_for_when_there_is_only_one_IDP__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 66.0s

_sig-auth__Feature_OpenShiftAuthorization__scopes_TestScopedTokens_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.9s

_sig-imageregistry__Feature_ImageTriggers__Image_change_build_triggers_TestMultipleImageChangeBuildTriggers__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.7s

_sig-cli__oc_explain_list_uncovered_GroupVersionResources__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.9s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_when_run_iteratively_should_only_deploy_the_last_deployment__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 208.0s

_sig-instrumentation__Prometheus_when_installed_on_the_cluster_shouldn't_have_failing_rules_evaluation__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 80.0s

_sig-network__Feature_Router__The_HAProxy_router_should_expose_a_health_check_on_the_metrics_port__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 144.0s

_sig-cli__oc_debug_ensure_it_works_with_image_streams__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 26.0s

_sig-imageregistry__Feature_ImageTriggers__Image_change_build_triggers_TestSimpleImageChangeBuildTriggerFromImageStreamTagSTI__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 5.3s

_sig-coreos___Conformance__CoreOS_bootimages_TestBootimagesPresent__Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 0.4s

_Conformance__sig-api-machinery__Feature_APIServer__local_kubeconfig__lb-int.kubeconfig__should_be_present_on_all_masters_and_work__Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 10.5s

_sig-instrumentation__Prometheus_when_installed_on_the_cluster_should_have_important_platform_topology_metrics__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 163.0s

Failed:
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:454]: Unexpected error:
    <errors.aggregate | len:6, cap:8>: [
        {
            s: "promQL query returned unexpected results:\nsum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!=\"\",label_node_role_kubernetes_io_master!=\"\"}) > 0\n[]",
        },
        {
            s: "promQL query returned unexpected results:\nsum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!=\"\",label_node_hyperthread_enabled!=\"\",label_node_role_kubernetes_io_master!=\"\"}) > 0\n[]",
        },
        {
            s: "promQL query returned unexpected results:\ncluster_infrastructure_provider{type!=\"\"}\n[]",
        },
        {
            s: "promQL query returned unexpected results:\ncluster_feature_set\n[]",
        },
        {
            s: "promQL query returned unexpected results:\ncluster_installer{type!=\"\",invoker!=\"\"}\n[]",
        },
        {
            s: "promQL query returned unexpected results:\ninstance:etcd_object_counts:sum > 0\n[]",
        },
    ]
    [promQL query returned unexpected results:
    sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
    [], promQL query returned unexpected results:
    sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
    [], promQL query returned unexpected results:
    cluster_infrastructure_provider{type!=""}
    [], promQL query returned unexpected results:
    cluster_feature_set
    [], promQL query returned unexpected results:
    cluster_installer{type!="",invoker!=""}
    [], promQL query returned unexpected results:
    instance:etcd_object_counts:sum > 0
    []]
occurred
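
Note: every query in the aggregate above returned an empty vector; the stdout below shows the queries themselves succeed ("status":"success") but match no series, i.e. the expected platform-topology metrics are absent rather than the PromQL being malformed. A hedged sketch for re-running one of these queries by hand, assuming cluster-admin access, the oc CLI, and any in-cluster pod that has curl (the test creates its own "execpod" for this; <namespace> and <pod-with-curl> below are placeholders):

  # on older clients, use: oc -n openshift-monitoring sa get-token prometheus-adapter
  TOKEN=$(oc -n openshift-monitoring create token prometheus-adapter)
  oc -n <namespace> exec <pod-with-curl> -- curl -s -k \
    -H "Authorization: Bearer $TOKEN" \
    "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"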

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:250
[It] should have important platform topology metrics [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:430
Oct 13 10:27:46.455: INFO: configPath is now "/tmp/configfile3845938436"
Oct 13 10:27:46.455: INFO: The user is now "e2e-test-prometheus-v6dwx-user"
Oct 13 10:27:46.455: INFO: Creating project "e2e-test-prometheus-v6dwx"
Oct 13 10:27:46.581: INFO: Waiting on permissions in project "e2e-test-prometheus-v6dwx" ...
Oct 13 10:27:46.599: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:27:46.720: INFO: Waiting for service account "default" secrets (default-token-ptvlj) to include dockercfg/token ...
Oct 13 10:27:46.818: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:27:46.939: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:27:47.086: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:27:47.127: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:27:47.189: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:27:47.917: INFO: Project "e2e-test-prometheus-v6dwx" has been fully provisioned.
Oct 13 10:27:47.920: INFO: Creating new exec pod
STEP: perform prometheus metric query cluster_feature_set
Oct 13 10:29:22.043: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"'
Oct 13 10:29:22.484: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'\n"
Oct 13 10:29:22.484: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Oct 13 10:29:22.484: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D"'
Oct 13 10:29:22.942: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D'\n"
Oct 13 10:29:22.942: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0
Oct 13 10:29:22.942: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0"'
Oct 13 10:29:23.355: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0'\n"
Oct 13 10:29:23.355: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:23.355: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:29:23.739: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:29:23.739: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:23.739: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:29:24.072: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:29:24.072: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Oct 13 10:29:24.072: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D"'
Oct 13 10:29:24.431: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D'\n"
Oct 13 10:29:24.431: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Oct 13 10:29:34.432: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D"'
Oct 13 10:29:34.843: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D'\n"
Oct 13 10:29:34.843: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_feature_set
Oct 13 10:29:34.844: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"'
Oct 13 10:29:35.233: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'\n"
Oct 13 10:29:35.233: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Oct 13 10:29:35.233: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D"'
Oct 13 10:29:35.631: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D'\n"
Oct 13 10:29:35.631: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0
Oct 13 10:29:35.632: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0"'
Oct 13 10:29:36.043: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0'\n"
Oct 13 10:29:36.043: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:36.043: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:29:36.494: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:29:36.494: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:36.494: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:29:36.914: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:29:36.914: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Oct 13 10:29:46.922: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D"'
Oct 13 10:29:47.403: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D'\n"
Oct 13 10:29:47.403: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_feature_set
Oct 13 10:29:47.403: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"'
Oct 13 10:29:47.870: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'\n"
Oct 13 10:29:47.870: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Oct 13 10:29:47.870: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D"'
Oct 13 10:29:48.235: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D'\n"
Oct 13 10:29:48.235: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0
Oct 13 10:29:48.235: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0"'
Oct 13 10:29:48.737: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0'\n"
Oct 13 10:29:48.738: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:48.738: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:29:49.181: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:29:49.181: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:49.181: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:29:49.812: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:29:49.812: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Oct 13 10:29:59.814: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D"'
Oct 13 10:30:00.264: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D'\n"
Oct 13 10:30:00.264: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_feature_set
Oct 13 10:30:00.264: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"'
Oct 13 10:30:00.740: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'\n"
Oct 13 10:30:00.740: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Oct 13 10:30:00.740: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D"'
Oct 13 10:30:01.219: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D'\n"
Oct 13 10:30:01.219: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0
Oct 13 10:30:01.219: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0"'
Oct 13 10:30:01.672: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0'\n"
Oct 13 10:30:01.672: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:30:01.672: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:30:02.047: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:30:02.047: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:30:02.047: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:30:02.532: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:30:02.532: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_feature_set
Oct 13 10:30:12.537: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"'
Oct 13 10:30:12.890: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'\n"
Oct 13 10:30:12.890: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Oct 13 10:30:12.891: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D"'
Oct 13 10:30:13.264: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D'\n"
Oct 13 10:30:13.264: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0
Oct 13 10:30:13.264: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0"'
Oct 13 10:30:13.625: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0'\n"
Oct 13 10:30:13.625: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:30:13.625: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:30:14.100: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:30:14.100: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:30:14.100: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Oct 13 10:30:14.637: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Oct 13 10:30:14.637: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Oct 13 10:30:14.637: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D"'
Oct 13 10:30:15.074: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D'\n"
Oct 13 10:30:15.074: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-prometheus-v6dwx".
STEP: Found 5 events.
Oct 13 10:30:25.134: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-v6dwx/execpod to ostest-n5rnf-worker-0-j4pkp
Oct 13 10:30:25.134: INFO: At 2022-10-13 10:29:20 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.163.122/23] from kuryr
Oct 13 10:30:25.134: INFO: At 2022-10-13 10:29:20 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine
Oct 13 10:30:25.134: INFO: At 2022-10-13 10:29:20 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 10:30:25.134: INFO: At 2022-10-13 10:29:20 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 10:30:25.144: INFO: POD      NODE                         PHASE    GRACE  CONDITIONS
Oct 13 10:30:25.144: INFO: execpod  ostest-n5rnf-worker-0-j4pkp  Running  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:27:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:29:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:29:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:27:48 +0000 UTC  }]
Oct 13 10:30:25.144: INFO: 
Oct 13 10:30:25.161: INFO: skipping dumping cluster info - cluster too large
Oct 13 10:30:25.208: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-prometheus-v6dwx-user}, err: <nil>
Oct 13 10:30:25.253: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-prometheus-v6dwx}, err: <nil>
Oct 13 10:30:25.299: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~WN9dG42ISAA-HmrjSwS2VqZ6Yu9Y-l2mowFfKgGpBsI}, err: <nil>
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-prometheus-v6dwx" for this suite.
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:454]: Unexpected error:
    <errors.aggregate | len:6, cap:8>: [
        {
            s: "promQL query returned unexpected results:\nsum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!=\"\",label_node_role_kubernetes_io_master!=\"\"}) > 0\n[]",
        },
        {
            s: "promQL query returned unexpected results:\nsum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!=\"\",label_node_hyperthread_enabled!=\"\",label_node_role_kubernetes_io_master!=\"\"}) > 0\n[]",
        },
        {
            s: "promQL query returned unexpected results:\ncluster_infrastructure_provider{type!=\"\"}\n[]",
        },
        {
            s: "promQL query returned unexpected results:\ncluster_feature_set\n[]",
        },
        {
            s: "promQL query returned unexpected results:\ncluster_installer{type!=\"\",invoker!=\"\"}\n[]",
        },
        {
            s: "promQL query returned unexpected results:\ninstance:etcd_object_counts:sum > 0\n[]",
        },
    ]
    [promQL query returned unexpected results:
    sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
    [], promQL query returned unexpected results:
    sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
    [], promQL query returned unexpected results:
    cluster_infrastructure_provider{type!=""}
    [], promQL query returned unexpected results:
    cluster_feature_set
    [], promQL query returned unexpected results:
    cluster_installer{type!="",invoker!=""}
    [], promQL query returned unexpected results:
    instance:etcd_object_counts:sum > 0
    []]
occurred

Stderr
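
Note: a minimal reproduction sketch for the failing PromQL checks above, assuming a logged-in oc session with permission to query monitoring and the standard thanos-querier route in openshift-monitoring (everything not taken from the log above is illustrative, not the test's own code):

  TOKEN=$(oc whoami -t)
  HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
  # On a healthy cluster each query listed in the failure should return a non-empty result vector.
  curl -sk -G -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" \
    --data-urlencode 'query=cluster_infrastructure_provider{type!=""}'
  curl -sk -G -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" \
    --data-urlencode 'query=cluster_installer{type!="",invoker!=""}'

All six queries in the failure came back as empty vectors ("result":[]), which is what the test reports as unexpected.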
_sig-auth__Feature_OAuthServer__ClientSecretWithPlus_should_create_oauthclient__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.7s

_sig-arch___Conformance__FIPS_TestFIPS__Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 4.1s

_sig-imageregistry__Feature_ImageTriggers__Annotation_trigger_reconciles_after_the_image_is_overwritten__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 6.2s

_sig-cluster-lifecycle__CSRs_from_machines_that_are_not_recognized_by_the_cloud_provider_are_not_approved__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 33.5s

_sig-api-machinery__APIServer_CR_fields_validation_additionalCORSAllowedOrigins__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.6s

_sig-auth__Feature_OAuthServer__OAuth_server_has_the_correct_token_and_certificate_fallback_semantics__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.9s

_sig-network__Feature_Router__The_HAProxy_router_converges_when_multiple_routers_are_writing_status__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 114.0s

_sig-auth__Feature_OpenShiftAuthorization__scopes_TestUnknownScopes_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.0s

_sig-builds__Feature_Builds__custom_build_with_buildah__being_created_from_new-build_should_complete_build_with_custom_builder_image__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 198.0s

_sig-operator__OLM_should_be_installed_with_subscriptions_at_version_v1alpha1__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

_sig-imageregistry__Feature_Image__oc_tag_should_change_image_reference_for_internal_images__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 79.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_build.openshift.io/v1,_Resource=builds__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-devex__Feature_Templates__templateinstance_impersonation_tests_should_pass_impersonation_update_tests__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 9.5s

_Conformance__sig-api-machinery__Feature_APIServer__local_kubeconfig__localhost-recovery.kubeconfig__should_be_present_on_all_masters_and_work__Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 13.0s

_sig-auth__Feature_OAuthServer___Headers__expected_headers_returned_from_the_grant_URL__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 44.5s

_sig-builds__Feature_Builds__result_image_should_have_proper_labels_set__S2I_build_from_a_template_should_create_a_image_from__test-s2i-build.json__template_with_proper_Docker_labels__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 84.0s

_sig-apps__Feature_OpenShiftControllerManager__TestDeploymentConfigDefaults__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-cli__oc_builds_complex_build_webhooks_CRUD__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 6.3s

_sig-devex__Feature_OpenShiftControllerManager__TestAutomaticCreationOfPullSecrets__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.1s

_sig-builds__Feature_Builds__Optimized_image_builds__should_succeed__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 78.0s

_sig-cli__oc_explain_should_contain_proper_spec+status_for_CRDs__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 31.1s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_when_changing_image_change_trigger_should_successfully_trigger_from_an_updated_image__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 74.0s

_sig-auth__Feature_UserAPI__users_can_manipulate_groups__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 5.0s

_sig-auth__Feature_OpenShiftAuthorization__RBAC_proxy_for_openshift_authz__RunLegacyLocalRoleEndpoint_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.3s

_sig-cli__oc_adm_role-selectors__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.3s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_with_custom_deployments_should_run_the_custom_deployment_steps__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 96.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_template.openshift.io/v1,_Resource=templates__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_build.openshift.io/v1,_Resource=buildconfigs__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-devex__Feature_Templates__templateinstance_readiness_test__should_report_ready_soon_after_all_annotated_objects_are_ready__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 125.0s

_sig-api-machinery__Feature_ResourceQuota__Object_count_should_properly_count_the_number_of_imagestreams_resources__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.6s

_sig-network-edge__Conformance__Area_Networking__Feature_Router__The_HAProxy_router_should_pass_the_http2_tests__Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 3.0s

Skipped: skip [github.com/openshift/origin/test/extended/router/http2.go:100]: Skip on platforms where the default router is not exposed by a load balancer service.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:25:17.708: INFO: configPath is now "/tmp/configfile1979282045"
Oct 13 10:25:17.708: INFO: The user is now "e2e-test-router-http2-7mvp9-user"
Oct 13 10:25:17.708: INFO: Creating project "e2e-test-router-http2-7mvp9"
Oct 13 10:25:18.291: INFO: Waiting on permissions in project "e2e-test-router-http2-7mvp9" ...
Oct 13 10:25:18.298: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:25:18.407: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:25:18.534: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:25:18.649: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:25:18.663: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:25:18.675: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:25:19.288: INFO: Project "e2e-test-router-http2-7mvp9" has been fully provisioned.
[It] should pass the http2 tests [Suite:openshift/conformance/parallel/minimal]
  github.com/openshift/origin/test/extended/router/http2.go:90
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:25:19.404: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-router-http2-7mvp9-user}, err: <nil>
Oct 13 10:25:19.582: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-router-http2-7mvp9}, err: <nil>
Oct 13 10:25:19.747: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~ItxIv7v9gNLFZ0KzvDPN8pw_KSjYB_6bGNBfbfwZ1MA}, err: <nil>
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-router-http2-7mvp9" for this suite.
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/router/http2.go:73
skip [github.com/openshift/origin/test/extended/router/http2.go:100]: Skip on platforms where the default router is not exposed by a load balancer service.

Stderr
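
Note: this test (and the gRPC interoperability test below) skips whenever the default ingress controller is not published through a LoadBalancer service. A quick check, assuming the standard ingress controller and router-default service names (a sketch, not part of the suite):

  oc -n openshift-ingress-operator get ingresscontroller default \
    -o jsonpath='{.status.endpointPublishingStrategy.type}{"\n"}'
  oc -n openshift-ingress get svc router-default -o jsonpath='{.spec.type}{"\n"}'

If the publishing strategy is not LoadBalancerService (and the service type is not LoadBalancer), these skips are expected on this platform.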
_sig-network-edge__Conformance__Area_Networking__Feature_Router__The_HAProxy_router_should_pass_the_gRPC_interoperability_tests__Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 2.0s

Skipped: skip [github.com/openshift/origin/test/extended/router/grpc-interop.go:57]: Skip on platforms where the default router is not exposed by a load balancer service.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:25:15.577: INFO: configPath is now "/tmp/configfile1585300186"
Oct 13 10:25:15.577: INFO: The user is now "e2e-test-grpc-interop-pfbzs-user"
Oct 13 10:25:15.577: INFO: Creating project "e2e-test-grpc-interop-pfbzs"
Oct 13 10:25:15.718: INFO: Waiting on permissions in project "e2e-test-grpc-interop-pfbzs" ...
Oct 13 10:25:15.730: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:25:15.847: INFO: Waiting for service account "default" secrets () to include dockercfg/token ...
Oct 13 10:25:15.942: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:25:16.054: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:25:16.168: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:25:16.176: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:25:16.199: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:25:16.826: INFO: Project "e2e-test-grpc-interop-pfbzs" has been fully provisioned.
[It] should pass the gRPC interoperability tests [Suite:openshift/conformance/parallel/minimal]
  github.com/openshift/origin/test/extended/router/grpc-interop.go:47
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:25:16.867: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-grpc-interop-pfbzs-user}, err: <nil>
Oct 13 10:25:16.885: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-grpc-interop-pfbzs}, err: <nil>
Oct 13 10:25:16.925: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~trd4xNLHo7L1y4mpZAgagm_tFpmqEJe1km9bJ5CpwYI}, err: <nil>
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-grpc-interop-pfbzs" for this suite.
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/router/grpc-interop.go:36
skip [github.com/openshift/origin/test/extended/router/grpc-interop.go:57]: Skip on platforms where the default router is not exposed by a load balancer service.

Stderr
_sig-network__multicast_when_using_one_of_the_OpenshiftSDN_modes_'redhat/openshift-ovs-multitenant,_redhat/openshift-ovs-networkpolicy'_should_block_multicast_traffic_in_namespaces_where_it_is_disabled__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.3s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:384]: Not using one of the specified OpenshiftSDN modes
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:25:13.260: INFO: configPath is now "/tmp/configfile1397709763"
Oct 13 10:25:13.260: INFO: The user is now "e2e-test-multicast-dxfdl-user"
Oct 13 10:25:13.260: INFO: Creating project "e2e-test-multicast-dxfdl"
Oct 13 10:25:13.465: INFO: Waiting on permissions in project "e2e-test-multicast-dxfdl" ...
Oct 13 10:25:13.474: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:25:13.603: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:25:13.712: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:25:13.821: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:25:13.829: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:25:13.844: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:25:14.469: INFO: Project "e2e-test-multicast-dxfdl" has been fully provisioned.
[BeforeEach] when using one of the OpenshiftSDN modes 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy'
  github.com/openshift/origin/test/extended/networking/util.go:375
Oct 13 10:25:14.868: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used
Oct 13 10:25:14.868: INFO: Not using one of the specified OpenshiftSDN modes
[AfterEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:25:14.907: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-multicast-dxfdl-user}, err: <nil>
Oct 13 10:25:14.939: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-multicast-dxfdl}, err: <nil>
Oct 13 10:25:14.957: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~UcXkSAzTQZ6CeIwXUDnQsRyCzAaY57AVCiyDPHFg6kg}, err: <nil>
[AfterEach] [sig-network] multicast
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-multicast-dxfdl" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:384]: Not using one of the specified OpenshiftSDN modes

Stderr
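
Note: this skip, and the OpenshiftSDN network-isolation skips later in this report, depend on the cluster network plugin. A one-line check against the standard network config resource (a sketch; not run by the suite):

  oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'

Pod events elsewhere in this report show interfaces added "from kuryr", so this cluster does not run OpenShiftSDN and the multicast/isolation tests are expected to skip.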
_sig-network__Feature_Router__The_HAProxy_router_should_expose_the_profiling_endpoints__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 69.0s

_sig-imageregistry__Feature_ImageLookup__Image_policy_should_update_standard_Kube_object_image_fields_when_local_names_are_on__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 8.3s

_sig-builds__Feature_Builds__verify_/run_filesystem_contents__do_not_have_unexpected_content_using_a_simple_Docker_Strategy_Build__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 94.0s

_sig-arch__Managed_cluster_should_ensure_platform_components_have_system-__priority_class_associated__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

Failed:
flake: Workloads with outstanding bugs:
Component downloads has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954866
Component ingress-canary has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954892
Component migrator has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954868
Component network-check-source has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870
Component network-check-target has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[It] ensure platform components have system-* priority class associated [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/pods/priorityclasses.go:20
Oct 13 10:24:59.354: INFO: Workloads with outstanding bugs:
Component downloads has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954866
Component ingress-canary has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954892
Component migrator has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954868
Component network-check-source has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870
Component network-check-target has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870
Oct 13 10:24:59.354: INFO: Workloads with outstanding bugs:
Component downloads has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954866
Component ingress-canary has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954892
Component migrator has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954868
Component network-check-source has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870
Component network-check-target has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870
[AfterEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:140
[AfterEach] [sig-arch] Managed cluster should
  github.com/openshift/origin/test/extended/util/client.go:141
flake: Workloads with outstanding bugs:
Component downloads has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954866
Component ingress-canary has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954892
Component migrator has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954868
Component network-check-source has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870
Component network-check-target has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870

Stderr
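
Note: a quick way to list the platform workloads this flake is about, i.e. pods in openshift-* namespaces without a system-* priority class (the custom-columns/awk filter is illustrative, not the test's own logic):

  oc get pods -A \
    -o custom-columns='NAMESPACE:.metadata.namespace,POD:.metadata.name,PRIORITY:.spec.priorityClassName' \
    | awk '$1 ~ /^openshift-/ && $3 !~ /^system-/'

The components named above (downloads, ingress-canary, migrator, network-check-source, network-check-target) already have Bugzilla entries, which is why the result is recorded as a flake rather than a hard failure.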
_sig-arch__Managed_cluster_should_ensure_platform_components_have_system-__priority_class_associated__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

_sig-arch__Managed_cluster_should_have_operators_on_the_cluster_version__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.5s

_sig-imageregistry__Feature_ImageTriggers__Image_change_build_triggers_TestSimpleImageChangeBuildTriggerFromImageStreamTagDockerWithConfigChange__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.8s

_sig-operator__OLM_should_be_installed_with_operatorgroups_at_version_v1__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

_sig-auth__Feature_ProjectAPI___TestUnprivilegedNewProject__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.3s

_sig-auth__Feature_RoleBindingRestrictions__RoleBindingRestrictions_should_be_functional__Create_a_rolebinding_when_there_are_no_restrictions_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.1s

_sig-cli__oc_adm_who-can__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.7s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_generation_should_deploy_based_on_a_status_version_bump__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 120.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_oauth.openshift.io/v1,_Resource=oauthclientauthorizations__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.4s

_sig-cluster-lifecycle__TestAdminAck_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.0s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_should_respect_image_stream_tag_reference_policy_resolve_the_image_pull_spec__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.8s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_route.openshift.io/v1,_Resource=routes__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.3s

_sig-apps__Feature_OpenShiftControllerManager__TestTriggers_imageChange_nonAutomatic__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 42.0s

_sig-node__should_override_timeoutGracePeriodSeconds_when_annotation_is_set__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 61.0s

_sig-network__network_isolation_when_using_OpenshiftSDN_in_a_mode_that_isolates_namespaces_by_default_should_prevent_communication_between_pods_in_different_namespaces_on_different_nodes__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.9s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:23:36.893: INFO: configPath is now "/tmp/configfile2438031237"
Oct 13 10:23:36.894: INFO: The user is now "e2e-test-ns-global-58rkb-user"
Oct 13 10:23:36.894: INFO: Creating project "e2e-test-ns-global-58rkb"
Oct 13 10:23:37.057: INFO: Waiting on permissions in project "e2e-test-ns-global-58rkb" ...
Oct 13 10:23:37.065: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:23:37.172: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:23:37.280: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:23:37.387: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:23:37.395: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:23:37.401: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:23:37.950: INFO: Project "e2e-test-ns-global-58rkb" has been fully provisioned.
[BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  github.com/openshift/origin/test/extended/networking/util.go:350
Oct 13 10:23:38.223: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used
Oct 13 10:23:38.223: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:23:38.256: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-ns-global-58rkb-user}, err: <nil>
Oct 13 10:23:38.276: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-ns-global-58rkb}, err: <nil>
Oct 13 10:23:38.294: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~N8KNr1-U5uwIpC27LyIVUUML1Gs6YTtOEbinbetFT3w}, err: <nil>
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-ns-global-58rkb" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.

Stderr
_sig-arch__Managed_cluster_should_ensure_control_plane_operators_do_not_make_themselves_unevictable__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.5s

_sig-arch__Cluster_topology_single_node_tests_Verify_that_OpenShift_components_deploy_one_replica_in_SingleReplica_topology_mode__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

Skipped: skip [github.com/openshift/origin/test/extended/single_node/topology.go:138]: Test is only relevant for single replica topologies
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-arch] Cluster topology single node tests
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename single-node
W1013 10:23:35.866248   95025 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:23:35.866: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] Verify that OpenShift components deploy one replica in SingleReplica topology mode [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/single_node/topology.go:134
Oct 13 10:23:35.884: INFO: Test is only relevant for single replica topologies
[AfterEach] [sig-arch] Cluster topology single node tests
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-single-node-5693" for this suite.
skip [github.com/openshift/origin/test/extended/single_node/topology.go:138]: Test is only relevant for single replica topologies

Stderr
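
Note: the single-node test above only runs when the cluster reports a SingleReplica topology. The topology can be read from the infrastructure status (standard fields on recent OpenShift releases; shown here as a sketch):

  oc get infrastructure cluster \
    -o jsonpath='{.status.controlPlaneTopology}{" "}{.status.infrastructureTopology}{"\n"}'

A multi-node cluster such as this one is expected to report HighlyAvailable, hence the skip.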
_sig-builds__Feature_Builds__verify_/run_filesystem_contents__are_writeable_using_a_simple_Docker_Strategy_Build__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 149.0s

_sig-network__network_isolation_when_using_OpenshiftSDN_in_a_mode_that_isolates_namespaces_by_default_should_allow_communication_from_non-default_to_default_namespace_on_a_different_node__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.4s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:23:33.508: INFO: configPath is now "/tmp/configfile1661420397"
Oct 13 10:23:33.508: INFO: The user is now "e2e-test-ns-global-49cmq-user"
Oct 13 10:23:33.508: INFO: Creating project "e2e-test-ns-global-49cmq"
Oct 13 10:23:33.799: INFO: Waiting on permissions in project "e2e-test-ns-global-49cmq" ...
Oct 13 10:23:33.812: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:23:33.924: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:23:34.036: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:23:34.144: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:23:34.157: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:23:34.167: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:23:34.903: INFO: Project "e2e-test-ns-global-49cmq" has been fully provisioned.
[BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  github.com/openshift/origin/test/extended/networking/util.go:350
Oct 13 10:23:35.192: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used
Oct 13 10:23:35.192: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:23:35.239: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-ns-global-49cmq-user}, err: <nil>
Oct 13 10:23:35.282: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-ns-global-49cmq}, err: <nil>
Oct 13 10:23:35.306: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~FfkaJk3AfoZzzQdqwaieRzbbtVXXZL00CX0BexLKXeg}, err: <nil>
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-ns-global-49cmq" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.

Stderr
_sig-apps__Feature_DeploymentConfig__deploymentconfigs_with_multiple_image_change_triggers_should_run_a_successful_deployment_with_a_trigger_used_by_different_containers__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 97.0s

Failed:
fail [github.com/openshift/origin/test/extended/deployments/deployments.go:561]: Unexpected error:
    <*errors.errorString | 0xc00216a8f0>: {
        s: "deployment e2e-test-cli-deployment-dcz78/example-1 failed",
    }
    deployment e2e-test-cli-deployment-dcz78/example-1 failed
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:23:31.733: INFO: configPath is now "/tmp/configfile4161650962"
Oct 13 10:23:31.733: INFO: The user is now "e2e-test-cli-deployment-dcz78-user"
Oct 13 10:23:31.733: INFO: Creating project "e2e-test-cli-deployment-dcz78"
Oct 13 10:23:32.006: INFO: Waiting on permissions in project "e2e-test-cli-deployment-dcz78" ...
Oct 13 10:23:32.018: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:23:32.127: INFO: Waiting for service account "default" secrets (default-token-vlst4) to include dockercfg/token ...
Oct 13 10:23:32.233: INFO: Waiting for service account "default" secrets (default-token-vlst4) to include dockercfg/token ...
Oct 13 10:23:32.333: INFO: Waiting for service account "default" secrets (default-token-vlst4) to include dockercfg/token ...
Oct 13 10:23:32.437: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:23:32.548: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:23:32.654: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:23:32.662: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:23:32.703: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:23:33.514: INFO: Project "e2e-test-cli-deployment-dcz78" has been fully provisioned.
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[JustBeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/deployments/deployments.go:52
[It] should run a successful deployment with a trigger used by different containers [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/deployments/deployments.go:555
STEP: verifying the deployment is marked complete
[AfterEach] with multiple image change triggers
  github.com/openshift/origin/test/extended/deployments/deployments.go:542
Oct 13 10:25:05.557: INFO: Running 'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 get dc/example -o yaml'
Oct 13 10:25:05.672: INFO: 
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  creationTimestamp: "2022-10-13T10:23:33Z"
  generation: 2
  labels:
    app: example
  name: example
  namespace: e2e-test-cli-deployment-dcz78
  resourceVersion: "955389"
  uid: 686b7d28-7a36-497c-8565-b485e4ac0c07
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    app: example
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: example
    spec:
      containers:
      - command:
        - /bin/sleep
        - "100"
        image: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985
        imagePullPolicy: IfNotPresent
        name: ruby
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - command:
        - /bin/sleep
        - "100"
        image: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985
        imagePullPolicy: IfNotPresent
        name: ruby2
        ports:
        - containerPort: 8081
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  test: false
  triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
      - ruby
      - ruby2
      from:
        kind: ImageStreamTag
        name: ruby:latest
        namespace: openshift
      lastTriggeredImage: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985
    type: ImageChange
status:
  availableReplicas: 0
  conditions:
  - lastTransitionTime: "2022-10-13T10:23:33Z"
    lastUpdateTime: "2022-10-13T10:23:33Z"
    message: Deployment config does not have minimum availability.
    status: "False"
    type: Available
  - lastTransitionTime: "2022-10-13T10:25:05Z"
    lastUpdateTime: "2022-10-13T10:25:05Z"
    message: replication controller "example-1" has failed progressing
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  details:
    causes:
    - type: ConfigChange
    message: config change
  latestVersion: 1
  observedGeneration: 2
  replicas: 0
  unavailableReplicas: 0
  updatedReplicas: 0

Oct 13 10:25:05.715: INFO: Running 'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 get rc/example-1 -o yaml'
Oct 13 10:25:05.878: INFO: 
apiVersion: v1
kind: ReplicationController
metadata:
  annotations:
    kubectl.kubernetes.io/desired-replicas: "1"
    openshift.io/deployer-pod.completed-at: 2022-10-13 10:25:02 +0000 UTC
    openshift.io/deployer-pod.created-at: 2022-10-13 10:23:34 +0000 UTC
    openshift.io/deployer-pod.name: example-1-deploy
    openshift.io/deployment-config.latest-version: "1"
    openshift.io/deployment-config.name: example
    openshift.io/deployment.phase: Failed
    openshift.io/deployment.replicas: "0"
    openshift.io/deployment.status-reason: config change
    openshift.io/encoded-deployment-config: |
      {"kind":"DeploymentConfig","apiVersion":"apps.openshift.io/v1","metadata":{"name":"example","namespace":"e2e-test-cli-deployment-dcz78","uid":"686b7d28-7a36-497c-8565-b485e4ac0c07","resourceVersion":"952925","generation":2,"creationTimestamp":"2022-10-13T10:23:33Z","labels":{"app":"example"},"managedFields":[{"manager":"openshift-tests","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{}},"f:strategy":{"f:activeDeadlineSeconds":{},"f:rollingParams":{".":{},"f:intervalSeconds":{},"f:maxSurge":{},"f:maxUnavailable":{},"f:timeoutSeconds":{},"f:updatePeriodSeconds":{}},"f:type":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"ruby\"}":{".":{},"f:command":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":8080,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"ruby2\"}":{".":{},"f:command":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":8081,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}},{"manager":"openshift-controller-manager","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"ruby\"}":{"f:image":{}},"k:{\"name\":\"ruby2\"}":{"f:image":{}}}}},"f:triggers":{}}}},{"manager":"openshift-controller-manager","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:status":{},"f:type":{}}},"f:details":{".":{},"f:causes":{},"f:message":{}},"f:latestVersion":{},"f:observedGeneration":{}}},"subresource":"status"}]},"spec":{"strategy":{"type":"Rolling","rollingParams":{"updatePeriodSeconds":1,"intervalSeconds":1,"timeoutSeconds":600,"maxUnavailable":"25%","maxSurge":"25%"},"resources":{},"activeDeadlineSeconds":21600},"triggers":[{"type":"ConfigChange"},{"type":"ImageChange","imageChangeParams":{"automatic":true,"containerNames":["ruby","ruby2"],"from":{"kind":"ImageStreamTag","namespace":"openshift","name":"ruby:latest"},"lastTriggeredImage":"image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985"}}],"replicas":1,"revisionHistoryLimit":10,"test":false,"selector":{"app":"example"},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"example"}},"spec":{"containers":[{"name":"ruby","image":"image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985","command":["/bin/sleep","100"],"ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"ruby2","image":"image-registry.openshift-image-registry.svc:500
0/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985","command":["/bin/sleep","100"],"ports":[{"containerPort":8081,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}}},"status":{"latestVersion":1,"observedGeneration":1,"replicas":0,"updatedReplicas":0,"availableReplicas":0,"unavailableReplicas":0,"details":{"message":"config change","causes":[{"type":"ConfigChange"}]},"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2022-10-13T10:23:33Z","lastTransitionTime":"2022-10-13T10:23:33Z","message":"Deployment config does not have minimum availability."}]}}
  creationTimestamp: "2022-10-13T10:23:34Z"
  generation: 1
  labels:
    app: example
    openshift.io/deployment-config.name: example
  name: example-1
  namespace: e2e-test-cli-deployment-dcz78
  ownerReferences:
  - apiVersion: apps.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: DeploymentConfig
    name: example
    uid: 686b7d28-7a36-497c-8565-b485e4ac0c07
  resourceVersion: "955387"
  uid: a63b3f6d-82a3-4f94-a1b2-99e358591507
spec:
  replicas: 0
  selector:
    app: example
    deployment: example-1
    deploymentconfig: example
  template:
    metadata:
      annotations:
        openshift.io/deployment-config.latest-version: "1"
        openshift.io/deployment-config.name: example
        openshift.io/deployment.name: example-1
      creationTimestamp: null
      labels:
        app: example
        deployment: example-1
        deploymentconfig: example
    spec:
      containers:
      - command:
        - /bin/sleep
        - "100"
        image: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985
        imagePullPolicy: IfNotPresent
        name: ruby
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - command:
        - /bin/sleep
        - "100"
        image: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985
        imagePullPolicy: IfNotPresent
        name: ruby2
        ports:
        - containerPort: 8081
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  observedGeneration: 1
  replicas: 0

Oct 13 10:25:05.878: INFO: Running 'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 get pod/example-1-deploy -o yaml'
Oct 13 10:25:06.054: INFO: 
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "kuryr",
          "interface": "eth0",
          "ips": [
              "10.128.198.168"
          ],
          "mac": "fa:16:3e:71:ea:6b",
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "kuryr",
          "interface": "eth0",
          "ips": [
              "10.128.198.168"
          ],
          "mac": "fa:16:3e:71:ea:6b",
          "default": true,
          "dns": {}
      }]
    openshift.io/deployment-config.name: example
    openshift.io/deployment.name: example-1
    openshift.io/scc: restricted
  creationTimestamp: "2022-10-13T10:23:34Z"
  finalizers:
  - kuryr.openstack.org/pod-finalizer
  labels:
    openshift.io/deployer-pod-for.name: example-1
  name: example-1-deploy
  namespace: e2e-test-cli-deployment-dcz78
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: example-1
    uid: a63b3f6d-82a3-4f94-a1b2-99e358591507
  resourceVersion: "955384"
  uid: 1ba3bac1-d508-4b6f-9961-4685ab6e9ef4
spec:
  activeDeadlineSeconds: 21600
  containers:
  - env:
    - name: OPENSHIFT_DEPLOYMENT_NAME
      value: example-1
    - name: OPENSHIFT_DEPLOYMENT_NAMESPACE
      value: e2e-test-cli-deployment-dcz78
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902
    imagePullPolicy: IfNotPresent
    name: deployment
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      runAsUser: 1012610000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-7x56n
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: deployer-dockercfg-zdckm
  nodeName: ostest-n5rnf-worker-0-94fxs
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1012610000
    seLinuxOptions:
      level: s0:c112,c89
  serviceAccount: deployer
  serviceAccountName: deployer
  shareProcessNamespace: false
  terminationGracePeriodSeconds: 10
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-7x56n
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-10-13T10:23:34Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-10-13T10:25:03Z"
    message: 'containers with unready status: [deployment]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-10-13T10:25:03Z"
    message: 'containers with unready status: [deployment]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-10-13T10:23:34Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://72e560b2832c1ecfae8fe8d621b0b541a9d4634cd4b300977f86b0f9c09102b6
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902
    imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902
    lastState: {}
    name: deployment
    ready: false
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: cri-o://72e560b2832c1ecfae8fe8d621b0b541a9d4634cd4b300977f86b0f9c09102b6
        exitCode: 1
        finishedAt: "2022-10-13T10:25:02Z"
        reason: Error
        startedAt: "2022-10-13T10:24:32Z"
  hostIP: 10.196.2.169
  phase: Failed
  podIP: 10.128.198.168
  podIPs:
  - ip: 10.128.198.168
  qosClass: BestEffort
  startTime: "2022-10-13T10:23:34Z"

Oct 13 10:25:06.054: INFO: Running 'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 logs pod/example-1-deploy --timestamps=true'
Oct 13 10:25:06.292: INFO: --- pod example-1-deploy logs
2022-10-13T10:25:02.394050992Z error: couldn't get deployment example-1: Get "https://172.30.0.1:443/api/v1/namespaces/e2e-test-cli-deployment-dcz78/replicationcontrollers/example-1": dial tcp 172.30.0.1:443: i/o timeout---

Oct 13 10:25:06.292: INFO: Running 'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 get istag -o wide'
Oct 13 10:25:06.442: INFO: 
No resources found in e2e-test-cli-deployment-dcz78 namespace.

[AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/deployments/deployments.go:71
[AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-cli-deployment-dcz78".
STEP: Found 6 events.
Oct 13 10:25:08.459: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for example-1-deploy: { } Scheduled: Successfully assigned e2e-test-cli-deployment-dcz78/example-1-deploy to ostest-n5rnf-worker-0-94fxs
Oct 13 10:25:08.459: INFO: At 2022-10-13 10:23:34 +0000 UTC - event for example: {deploymentconfig-controller } DeploymentCreated: Created new replication controller "example-1" for version 1
Oct 13 10:25:08.459: INFO: At 2022-10-13 10:24:26 +0000 UTC - event for example-1-deploy: {multus } AddedInterface: Add eth0 [10.128.198.168/23] from kuryr
Oct 13 10:25:08.459: INFO: At 2022-10-13 10:24:26 +0000 UTC - event for example-1-deploy: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902" already present on machine
Oct 13 10:25:08.459: INFO: At 2022-10-13 10:24:32 +0000 UTC - event for example-1-deploy: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container deployment
Oct 13 10:25:08.459: INFO: At 2022-10-13 10:24:32 +0000 UTC - event for example-1-deploy: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container deployment
Oct 13 10:25:08.466: INFO: POD               NODE                         PHASE   GRACE  CONDITIONS
Oct 13 10:25:08.466: INFO: example-1-deploy  ostest-n5rnf-worker-0-94fxs  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:25:03 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:25:03 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:34 +0000 UTC  }]
Oct 13 10:25:08.466: INFO: 
Oct 13 10:25:08.473: INFO: skipping dumping cluster info - cluster too large
Oct 13 10:25:08.511: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-cli-deployment-dcz78-user}, err: <nil>
Oct 13 10:25:08.541: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-cli-deployment-dcz78}, err: <nil>
Oct 13 10:25:08.579: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~G6LEfjXpdPKWJarx_XHGLmpW3SVdyff5o2QDe-E5SLk}, err: <nil>
[AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-cli-deployment-dcz78" for this suite.
fail [github.com/openshift/origin/test/extended/deployments/deployments.go:561]: Unexpected error:
    <*errors.errorString | 0xc00216a8f0>: {
        s: "deployment e2e-test-cli-deployment-dcz78/example-1 failed",
    }
    deployment e2e-test-cli-deployment-dcz78/example-1 failed
occurred

Stderr
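
The failure above originates in the deployer pod log: example-1-deploy exited 1 because it could not reach the in-cluster API service ("dial tcp 172.30.0.1:443: i/o timeout"), so the rollout of example-1 was marked failed. A minimal connectivity probe along the following lines can confirm whether pods on the affected node reach 172.30.0.1:443 at all; this is an illustrative sketch, not part of the test — the pod name "api-probe", the placeholder namespace, and the fixed sleep are assumptions (the e2e namespace itself is deleted at teardown), and even a 403 response would demonstrate connectivity, since only a timeout reproduces the failure seen here.

# Illustrative probe using the same tools image the fixture deploys; not a command from the test.
oc --namespace=<test-namespace> run api-probe \
  --image=image-registry.openshift-image-registry.svc:5000/openshift/tools:latest \
  --restart=Never --command -- curl -sk -m 10 https://172.30.0.1:443/version
sleep 15   # arbitrary wait for the pod to run; not taken from the test
oc --namespace=<test-namespace> logs pod/api-probe
oc --namespace=<test-namespace> delete pod/api-probe
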
_sig-devex__Feature_Templates__templateservicebroker_end-to-end_test__should_pass_an_end-to-end_test__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.9s

Skipped: skip [github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:57]: The template service broker is not installed: services "apiserver" not found
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:23:29.821: INFO: configPath is now "/tmp/configfile3311815544"
Oct 13 10:23:29.822: INFO: The user is now "e2e-test-templates-4kzs9-user"
Oct 13 10:23:29.822: INFO: Creating project "e2e-test-templates-4kzs9"
Oct 13 10:23:29.997: INFO: Waiting on permissions in project "e2e-test-templates-4kzs9" ...
Oct 13 10:23:30.004: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:23:30.115: INFO: Waiting for service account "default" secrets (default-dockercfg-7r27l,default-dockercfg-7r27l) to include dockercfg/token ...
Oct 13 10:23:30.223: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:23:30.330: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:23:30.437: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:23:30.444: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:23:30.458: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:23:31.103: INFO: Project "e2e-test-templates-4kzs9" has been fully provisioned.
[JustBeforeEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test
  github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:53
Oct 13 10:23:31.112: INFO: The template service broker is not installed: services "apiserver" not found
[AfterEach] 
  github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:346
[AfterEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:23:31.137: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-templates-4kzs9-user}, err: <nil>
Oct 13 10:23:31.169: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-templates-4kzs9}, err: <nil>
Oct 13 10:23:31.185: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~W77J_cG9mxOJuMh3UnMTgu1c--CMvlwpJ6nPc-uZHKU}, err: <nil>
[AfterEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-templates-4kzs9" for this suite.
[AfterEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test
  github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:99
skip [github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:57]: The template service broker is not installed: services "apiserver" not found

Stderr
_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_apps.openshift.io/v1,_Resource=deploymentconfigs__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.3s

_sig-devex__Feature_Templates__templateinstance_impersonation_tests_should_pass_impersonation_deletion_tests__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 8.6s

_sig-auth__Feature_LDAP__LDAP_should_start_an_OpenLDAP_test_server__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 256.0s

_sig-network__Feature_Router__The_HAProxy_router_should_set_Forwarded_headers_appropriately__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.9s

Skipped: skip [github.com/openshift/origin/test/extended/router/headers.go:60]: BZ 1772125 -- not verified on platform type "OpenStack"
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:23:21.014: INFO: configPath is now "/tmp/configfile4260852904"
Oct 13 10:23:21.015: INFO: The user is now "e2e-test-router-headers-v7kl9-user"
Oct 13 10:23:21.015: INFO: Creating project "e2e-test-router-headers-v7kl9"
Oct 13 10:23:21.242: INFO: Waiting on permissions in project "e2e-test-router-headers-v7kl9" ...
Oct 13 10:23:21.247: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:23:21.361: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:23:21.475: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:23:21.597: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:23:21.605: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:23:21.611: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:23:22.138: INFO: Project "e2e-test-router-headers-v7kl9" has been fully provisioned.
[BeforeEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/router/headers.go:35
[It] should set Forwarded headers appropriately [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/router/headers.go:48
[AfterEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:23:22.341: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-router-headers-v7kl9-user}, err: <nil>
Oct 13 10:23:22.355: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-router-headers-v7kl9}, err: <nil>
Oct 13 10:23:22.368: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~TqzvAuBmhFUClSuQ6ze87URcnOQdOraoAS9Zksmlf0A}, err: <nil>
[AfterEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-router-headers-v7kl9" for this suite.
skip [github.com/openshift/origin/test/extended/router/headers.go:60]: BZ 1772125 -- not verified on platform type "OpenStack"

Stderr
_sig-apps__Feature_DeploymentConfig__deploymentconfigs_when_tagging_images_should_successfully_tag_the_deployed_image__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 63.0s

Failed:
fail [github.com/openshift/origin/test/extended/deployments/deployments.go:481]: Unexpected error:
    <*errors.errorString | 0xc001d76e70>: {
        s: "deployment e2e-test-cli-deployment-rxkqx/tag-images-1 failed",
    }
    deployment e2e-test-cli-deployment-rxkqx/tag-images-1 failed
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:22:55.311: INFO: configPath is now "/tmp/configfile3297171011"
Oct 13 10:22:55.311: INFO: The user is now "e2e-test-cli-deployment-rxkqx-user"
Oct 13 10:22:55.311: INFO: Creating project "e2e-test-cli-deployment-rxkqx"
Oct 13 10:22:55.423: INFO: Waiting on permissions in project "e2e-test-cli-deployment-rxkqx" ...
Oct 13 10:22:55.433: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:22:55.562: INFO: Waiting for service account "default" secrets (default-token-98p2s) to include dockercfg/token ...
Oct 13 10:22:55.650: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:22:55.770: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:22:55.892: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:22:55.904: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:22:55.918: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:22:56.648: INFO: Project "e2e-test-cli-deployment-rxkqx" has been fully provisioned.
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/framework.go:1453
[JustBeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/deployments/deployments.go:52
[It] should successfully tag the deployed image [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/deployments/deployments.go:474
STEP: creating the deployment config fixture
STEP: verifying the deployment is marked complete
[AfterEach] when tagging images
  github.com/openshift/origin/test/extended/deployments/deployments.go:470
Oct 13 10:23:54.674: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 get dc/tag-images -o yaml'
Oct 13 10:23:54.866: INFO: 
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  creationTimestamp: "2022-10-13T10:22:56Z"
  generation: 1
  name: tag-images
  namespace: e2e-test-cli-deployment-rxkqx
  resourceVersion: "953528"
  uid: dc3129c1-ed5d-4a82-9435-9ae94f3c1de3
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    name: tag-images
  strategy:
    activeDeadlineSeconds: 21600
    recreateParams:
      post:
        failurePolicy: Abort
        tagImages:
        - containerName: sample-name
          to:
            kind: ImageStreamTag
            name: sample-stream:deployed
      timeoutSeconds: 600
    resources: {}
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: tag-images
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - sleep 300
        image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
        imagePullPolicy: IfNotPresent
        name: sample-name
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 3Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 1
  test: true
  triggers:
  - type: ConfigChange
status:
  availableReplicas: 0
  conditions:
  - lastTransitionTime: "2022-10-13T10:22:56Z"
    lastUpdateTime: "2022-10-13T10:22:56Z"
    message: Deployment config does not have minimum availability.
    status: "False"
    type: Available
  - lastTransitionTime: "2022-10-13T10:23:54Z"
    lastUpdateTime: "2022-10-13T10:23:54Z"
    message: replication controller "tag-images-1" has failed progressing
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  details:
    causes:
    - type: ConfigChange
    message: config change
  latestVersion: 1
  observedGeneration: 1
  replicas: 0
  unavailableReplicas: 0
  updatedReplicas: 0

Oct 13 10:23:54.890: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 get rc/tag-images-1 -o yaml'
Oct 13 10:23:55.022: INFO: 
apiVersion: v1
kind: ReplicationController
metadata:
  annotations:
    kubectl.kubernetes.io/desired-replicas: "1"
    openshift.io/deployer-pod.completed-at: 2022-10-13 10:23:52 +0000 UTC
    openshift.io/deployer-pod.created-at: 2022-10-13 10:22:56 +0000 UTC
    openshift.io/deployer-pod.name: tag-images-1-deploy
    openshift.io/deployment-config.latest-version: "1"
    openshift.io/deployment-config.name: tag-images
    openshift.io/deployment.phase: Failed
    openshift.io/deployment.replicas: "0"
    openshift.io/deployment.status-reason: config change
    openshift.io/encoded-deployment-config: |
      {"kind":"DeploymentConfig","apiVersion":"apps.openshift.io/v1","metadata":{"name":"tag-images","namespace":"e2e-test-cli-deployment-rxkqx","uid":"dc3129c1-ed5d-4a82-9435-9ae94f3c1de3","resourceVersion":"951669","generation":1,"creationTimestamp":"2022-10-13T10:22:56Z","managedFields":[{"manager":"openshift-controller-manager","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:22:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:details":{".":{},"f:causes":{},"f:message":{}},"f:latestVersion":{}}},"subresource":"status"},{"manager":"openshift-tests","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:22:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:name":{}},"f:strategy":{"f:activeDeadlineSeconds":{},"f:recreateParams":{".":{},"f:post":{".":{},"f:failurePolicy":{},"f:tagImages":{}},"f:timeoutSeconds":{}},"f:type":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:name":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"sample-name\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":8080,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"f:test":{},"f:triggers":{}}}}]},"spec":{"strategy":{"type":"Recreate","recreateParams":{"timeoutSeconds":600,"post":{"failurePolicy":"Abort","tagImages":[{"containerName":"sample-name","to":{"kind":"ImageStreamTag","name":"sample-stream:deployed"}}]}},"resources":{},"activeDeadlineSeconds":21600},"triggers":[{"type":"ConfigChange"}],"replicas":1,"revisionHistoryLimit":10,"test":true,"selector":{"name":"tag-images"},"template":{"metadata":{"creationTimestamp":null,"labels":{"name":"tag-images"}},"spec":{"containers":[{"name":"sample-name","image":"image-registry.openshift-image-registry.svc:5000/openshift/tools:latest","command":["/bin/sh","-c","sleep 300"],"ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{"limits":{"cpu":"100m","memory":"3Gi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":1,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}}},"status":{"latestVersion":1,"observedGeneration":0,"replicas":0,"updatedReplicas":0,"availableReplicas":0,"unavailableReplicas":0,"details":{"message":"config change","causes":[{"type":"ConfigChange"}]}}}
  creationTimestamp: "2022-10-13T10:22:56Z"
  generation: 1
  labels:
    openshift.io/deployment-config.name: tag-images
  name: tag-images-1
  namespace: e2e-test-cli-deployment-rxkqx
  ownerReferences:
  - apiVersion: apps.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: DeploymentConfig
    name: tag-images
    uid: dc3129c1-ed5d-4a82-9435-9ae94f3c1de3
  resourceVersion: "953526"
  uid: 65d98a2d-5024-4cdf-af3b-a38005c46590
spec:
  replicas: 0
  selector:
    deployment: tag-images-1
    deploymentconfig: tag-images
    name: tag-images
  template:
    metadata:
      annotations:
        openshift.io/deployment-config.latest-version: "1"
        openshift.io/deployment-config.name: tag-images
        openshift.io/deployment.name: tag-images-1
      creationTimestamp: null
      labels:
        deployment: tag-images-1
        deploymentconfig: tag-images
        name: tag-images
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - sleep 300
        image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
        imagePullPolicy: IfNotPresent
        name: sample-name
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 3Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 1
status:
  observedGeneration: 1
  replicas: 0

Oct 13 10:23:55.022: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 get pod/tag-images-1-deploy -o yaml'
Oct 13 10:23:55.124: INFO: 
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "kuryr",
          "interface": "eth0",
          "ips": [
              "10.128.221.50"
          ],
          "mac": "fa:16:3e:02:b8:60",
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "kuryr",
          "interface": "eth0",
          "ips": [
              "10.128.221.50"
          ],
          "mac": "fa:16:3e:02:b8:60",
          "default": true,
          "dns": {}
      }]
    openshift.io/deployment-config.name: tag-images
    openshift.io/deployment.name: tag-images-1
    openshift.io/scc: restricted
  creationTimestamp: "2022-10-13T10:22:56Z"
  finalizers:
  - kuryr.openstack.org/pod-finalizer
  labels:
    openshift.io/deployer-pod-for.name: tag-images-1
  name: tag-images-1-deploy
  namespace: e2e-test-cli-deployment-rxkqx
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: tag-images-1
    uid: 65d98a2d-5024-4cdf-af3b-a38005c46590
  resourceVersion: "953525"
  uid: feb6640d-b1c9-4424-a87c-df4f0f84043b
spec:
  activeDeadlineSeconds: 21600
  containers:
  - env:
    - name: OPENSHIFT_DEPLOYMENT_NAME
      value: tag-images-1
    - name: OPENSHIFT_DEPLOYMENT_NAMESPACE
      value: e2e-test-cli-deployment-rxkqx
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902
    imagePullPolicy: IfNotPresent
    name: deployment
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      runAsUser: 1012510000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-mtlb2
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: deployer-dockercfg-gfwfd
  nodeName: ostest-n5rnf-worker-0-j4pkp
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1012510000
    seLinuxOptions:
      level: s0:c112,c39
  serviceAccount: deployer
  serviceAccountName: deployer
  shareProcessNamespace: false
  terminationGracePeriodSeconds: 10
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-mtlb2
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-10-13T10:22:56Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-10-13T10:23:52Z"
    message: 'containers with unready status: [deployment]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-10-13T10:23:52Z"
    message: 'containers with unready status: [deployment]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-10-13T10:22:56Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://d1e13160a20d8b4d32052c590aa4e9db0345d863b8cab28925913f238059b5b8
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902
    imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902
    lastState: {}
    name: deployment
    ready: false
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: cri-o://d1e13160a20d8b4d32052c590aa4e9db0345d863b8cab28925913f238059b5b8
        exitCode: 1
        finishedAt: "2022-10-13T10:23:52Z"
        reason: Error
        startedAt: "2022-10-13T10:23:22Z"
  hostIP: 10.196.0.199
  phase: Failed
  podIP: 10.128.221.50
  podIPs:
  - ip: 10.128.221.50
  qosClass: BestEffort
  startTime: "2022-10-13T10:22:56Z"

Oct 13 10:23:55.124: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 logs pod/tag-images-1-deploy --timestamps=true'
Oct 13 10:23:55.276: INFO: --- pod tag-images-1-deploy logs
2022-10-13T10:23:52.378254866Z error: couldn't get deployment tag-images-1: Get "https://172.30.0.1:443/api/v1/namespaces/e2e-test-cli-deployment-rxkqx/replicationcontrollers/tag-images-1": dial tcp 172.30.0.1:443: i/o timeout---

Oct 13 10:23:55.276: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 get istag -o wide'
Oct 13 10:23:55.396: INFO: 
No resources found in e2e-test-cli-deployment-rxkqx namespace.

[AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/deployments/deployments.go:71
[AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-cli-deployment-rxkqx".
STEP: Found 6 events.
Oct 13 10:23:57.407: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for tag-images-1-deploy: { } Scheduled: Successfully assigned e2e-test-cli-deployment-rxkqx/tag-images-1-deploy to ostest-n5rnf-worker-0-j4pkp
Oct 13 10:23:57.407: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for tag-images: {deploymentconfig-controller } DeploymentCreated: Created new replication controller "tag-images-1" for version 1
Oct 13 10:23:57.407: INFO: At 2022-10-13 10:23:21 +0000 UTC - event for tag-images-1-deploy: {multus } AddedInterface: Add eth0 [10.128.221.50/23] from kuryr
Oct 13 10:23:57.407: INFO: At 2022-10-13 10:23:22 +0000 UTC - event for tag-images-1-deploy: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902" already present on machine
Oct 13 10:23:57.407: INFO: At 2022-10-13 10:23:22 +0000 UTC - event for tag-images-1-deploy: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container deployment
Oct 13 10:23:57.407: INFO: At 2022-10-13 10:23:22 +0000 UTC - event for tag-images-1-deploy: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container deployment
Oct 13 10:23:57.413: INFO: POD                  NODE                         PHASE   GRACE  CONDITIONS
Oct 13 10:23:57.413: INFO: tag-images-1-deploy  ostest-n5rnf-worker-0-j4pkp  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:52 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:52 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC  }]
Oct 13 10:23:57.413: INFO: 
Oct 13 10:23:57.419: INFO: skipping dumping cluster info - cluster too large
Oct 13 10:23:57.435: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-cli-deployment-rxkqx-user}, err: <nil>
Oct 13 10:23:57.448: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-cli-deployment-rxkqx}, err: <nil>
Oct 13 10:23:57.475: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~SLEFsh6mB8xKXpk8QzkEVt-sunNz4Ziab7RnRyJ_x_w}, err: <nil>
[AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-cli-deployment-rxkqx" for this suite.
fail [github.com/openshift/origin/test/extended/deployments/deployments.go:481]: Unexpected error:
    <*errors.errorString | 0xc001d76e70>: {
        s: "deployment e2e-test-cli-deployment-rxkqx/tag-images-1 failed",
    }
    deployment e2e-test-cli-deployment-rxkqx/tag-images-1 failed
occurred

Stderr
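
This failure has the same root cause as the example-1 failure above: tag-images-1-deploy timed out against 172.30.0.1:443, the ReplicationController was annotated openshift.io/deployment.phase: Failed, and the post-deploy tagImages hook (which would have created the ImageStreamTag sample-stream:deployed) never ran. Had the namespace survived teardown, the two facts behind the assertion could be read back directly; the commands below are a sketch for that check, not commands issued by the test itself.

oc -n e2e-test-cli-deployment-rxkqx get rc tag-images-1 \
  -o jsonpath='{.metadata.annotations.openshift\.io/deployment\.phase}'   # the dump above shows "Failed"; a passing run would show "Complete"
oc -n e2e-test-cli-deployment-rxkqx get istag sample-stream:deployed      # would exist only if the post hook had run
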
_sig-auth__Feature_ProjectAPI___TestScopedProjectAccess_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 34.0s

_sig-operator__OLM_should_Implement_packages_API_server_and_list_packagemanifest_info_with_namespace_not_NULL__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.7s

_sig-imageregistry__Feature_Image__signature_TestImageAddSignature__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.0s

_sig-builds__Feature_Builds__clone_repository_using_git_//_protocol__should_clone_using_git_//_if_no_proxy_is_configured__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 14.2s

Skipped: skip [github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:40]: test disabled due to https://bugzilla.redhat.com/show_bug.cgi?id=2019433 and https://github.blog/2021-09-01-improving-git-protocol-security-github/#git-protocol-troubleshooting: 'The unauthenticated git protocol on port 9418 is no longer supported'
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-builds][Feature:Builds] clone repository using git:// protocol
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-builds][Feature:Builds] clone repository using git:// protocol
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:22:41.185: INFO: configPath is now "/tmp/configfile458335381"
Oct 13 10:22:41.185: INFO: The user is now "e2e-test-build-clone-git-protocol-hm7qz-user"
Oct 13 10:22:41.185: INFO: Creating project "e2e-test-build-clone-git-protocol-hm7qz"
Oct 13 10:22:41.430: INFO: Waiting on permissions in project "e2e-test-build-clone-git-protocol-hm7qz" ...
Oct 13 10:22:41.439: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:22:41.560: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:22:41.674: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:22:41.786: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:22:41.797: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:22:41.823: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:22:42.468: INFO: Project "e2e-test-build-clone-git-protocol-hm7qz" has been fully provisioned.
[BeforeEach] 
  github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:17
[JustBeforeEach] 
  github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:21
STEP: waiting for openshift namespace imagestreams
Oct 13 10:22:42.468: INFO: Waiting up to 2 minutes for the internal registry hostname to be published
Oct 13 10:22:44.549: INFO: the OCM pod logs indicate the build controller was started after the internal registry hostname has been set in the OCM config
Oct 13 10:22:44.564: INFO: OCM rollout progressing status reports complete
Oct 13 10:22:44.564: INFO: Scanning openshift ImageStreams 

Oct 13 10:22:54.577: INFO: SamplesOperator at steady state
Oct 13 10:22:54.578: INFO: SamplesOperator at steady state
Oct 13 10:22:54.578: INFO: Checking language ruby 

Oct 13 10:22:54.602: INFO: Checking tag {2.5-ubi8 map[description:Build and run Ruby 2.5 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.5/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.5 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.5,ruby tags:builder,ruby version:2.5] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-25:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c602f0 {false false} {Local}} 

Oct 13 10:22:54.603: INFO: Checking tag {2.6 map[description:Build and run Ruby 2.6 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby,hidden version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/ruby-26-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60390 {false false} {Local}} 

Oct 13 10:22:54.603: INFO: Checking tag {2.6-ubi7 map[description:Build and run Ruby 2.6 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-26:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60580 {false false} {Local}} 

Oct 13 10:22:54.603: INFO: Checking tag {2.6-ubi8 map[description:Build and run Ruby 2.6 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-26:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60690 {false false} {Local}} 

Oct 13 10:22:54.603: INFO: Checking tag {2.7 map[description:Build and run Ruby 2.7 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby,hidden version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/ruby-27-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60730 {false false} {Local}} 

Oct 13 10:22:54.603: INFO: Checking tag {2.7-ubi7 map[description:Build and run Ruby 2.7 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60800 {false false} {Local}} 

Oct 13 10:22:54.603: INFO: Checking tag {2.7-ubi8 map[description:Build and run Ruby 2.7 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c608c0 {false false} {Local}} 

Oct 13 10:22:54.603: INFO: Checking tag {3.0-ubi7 map[description:Build and run Ruby 3.0 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/3.0/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 3.0 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:3.0,ruby tags:builder,ruby version:3.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-30:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60980 {false false} {Local}} 

Oct 13 10:22:54.603: INFO: Checking tag {latest map[description:Build and run Ruby applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/tree/master/2.7/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Ruby available on OpenShift, including major version updates. iconClass:icon-ruby openshift.io/display-name:Ruby (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby tags:builder,ruby] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:2.7-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60a48 {false false} {Local}} 

Oct 13 10:22:54.603: INFO: Checking language nodejs 

Oct 13 10:22:54.618: INFO: Checking tag {12 map[description:Build and run Node.js 12 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/nodejs-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382130 {false false} {Local}} 

Oct 13 10:22:54.618: INFO: Checking tag {12-ubi7 map[description:Build and run Node.js 12 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/nodejs-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382280 {false false} {Local}} 

Oct 13 10:22:54.618: INFO: Checking tag {12-ubi8 map[description:Build and run Node.js 12 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0003823d0 {false false} {Local}} 

Oct 13 10:22:54.618: INFO: Checking tag {14-ubi7 map[description:Build and run Node.js 14 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/nodejs-14:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382530 {false false} {Local}} 

Oct 13 10:22:54.618: INFO: Checking tag {14-ubi8 map[description:Build and run Node.js 14 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-14:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0003827e0 {false false} {Local}} 

Oct 13 10:22:54.618: INFO: Checking tag {14-ubi8-minimal map[description:Build and run Node.js 14 applications on UBI 8 Minimal. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 8 Minimal) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-14-minimal:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382a90 {false false} {Local}} 

Oct 13 10:22:54.618: INFO: Checking tag {latest map[description:Build and run Node.js applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Node.js available on OpenShift, including major version updates. iconClass:icon-nodejs openshift.io/display-name:Node.js (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git supports:nodejs tags:builder,nodejs] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:14-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382c20 {false false} {Local}} 

Oct 13 10:22:54.618: INFO: Checking language perl 

Oct 13 10:22:54.636: INFO: Checking tag {5.26-ubi8 map[description:Build and run Perl 5.26 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.26-mod_fcgid/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.26 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.26,perl tags:builder,perl version:5.26] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/perl-526:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc87a0 {false false} {Local}} 

Oct 13 10:22:54.636: INFO: Checking tag {5.30 map[description:Build and run Perl 5.30 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl,hidden version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/perl-530-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc8850 {false false} {Local}} 

Oct 13 10:22:54.636: INFO: Checking tag {5.30-el7 map[description:Build and run Perl 5.30 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/perl-530-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc8980 {false false} {Local}} 

Oct 13 10:22:54.636: INFO: Checking tag {5.30-ubi8 map[description:Build and run Perl 5.30 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30-mod_fcgid/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/perl-530:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc8a40 {false false} {Local}} 

Oct 13 10:22:54.636: INFO: Checking tag {latest map[description:Build and run Perl applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30-mod_fcgid/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Perl available on OpenShift, including major version updates. iconClass:icon-perl openshift.io/display-name:Perl (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl tags:builder,perl] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:5.30-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc8b10 {false false} {Local}} 

Oct 13 10:22:54.636: INFO: Checking language php 

Oct 13 10:22:54.651: INFO: Checking tag {7.3 map[description:Build and run PHP 7.3 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php,hidden version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/php-73-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc9db0 {false false} {Local}} 

Oct 13 10:22:54.651: INFO: Checking tag {7.3-ubi7 map[description:Build and run PHP 7.3 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/php-73:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc9e70 {false false} {Local}} 

Oct 13 10:22:54.651: INFO: Checking tag {7.3-ubi8 map[description:Build and run PHP 7.3 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/php-73:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc9f30 {false false} {Local}} 

Oct 13 10:22:54.651: INFO: Checking tag {7.4-ubi8 map[description:Build and run PHP 7.4 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.4/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.4 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.4,php tags:builder,php version:7.4] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/php-74:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc9ff0 {false false} {Local}} 

Oct 13 10:22:54.651: INFO: Checking tag {latest map[description:Build and run PHP applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.4/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of PHP available on OpenShift, including major version updates. iconClass:icon-php openshift.io/display-name:PHP (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php tags:builder,php] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:7.4-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f200b0 {false false} {Local}} 

Oct 13 10:22:54.651: INFO: Checking language python 

Oct 13 10:22:54.668: INFO: Checking tag {2.7 map[description:Build and run Python 2.7 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python,hidden version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/python-27-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21650 {false false} {Local}} 

Oct 13 10:22:54.669: INFO: Checking tag {2.7-ubi7 map[description:Build and run Python 2.7 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/python-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f216f0 {false false} {Local}} 

Oct 13 10:22:54.669: INFO: Checking tag {2.7-ubi8 map[description:Build and run Python 2.7 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21790 {false false} {Local}} 

Oct 13 10:22:54.669: INFO: Checking tag {3.6-ubi8 map[description:Build and run Python 3.6 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.6/README.md. iconClass:icon-python openshift.io/display-name:Python 3.6 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.6,python tags:builder,python version:3.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-36:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21840 {false false} {Local}} 

Oct 13 10:22:54.669: INFO: Checking tag {3.8 map[description:Build and run Python 3.8 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python,hidden version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/python-38-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f218e0 {false false} {Local}} 

Oct 13 10:22:54.669: INFO: Checking tag {3.8-ubi7 map[description:Build and run Python 3.8 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/python-38:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21990 {false false} {Local}} 

Oct 13 10:22:54.669: INFO: Checking tag {3.8-ubi8 map[description:Build and run Python 3.8 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-38:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21a40 {false false} {Local}} 

Oct 13 10:22:54.669: INFO: Checking tag {3.9-ubi8 map[description:Build and run Python 3.9 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.9/README.md. iconClass:icon-python openshift.io/display-name:Python 3.9 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.9,python tags:builder,python version:3.9] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-39:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21ae0 {false false} {Local}} 

Oct 13 10:22:54.669: INFO: Checking tag {latest map[description:Build and run Python applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.9/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Python available on OpenShift, including major version updates. iconClass:icon-python openshift.io/display-name:Python (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python tags:builder,python] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:3.9-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21bc0 {false false} {Local}} 

Oct 13 10:22:54.669: INFO: Checking language mysql 

Oct 13 10:22:54.688: INFO: Checking tag {8.0 map[description:Provides a MySQL 8.0 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 openshift.io/provider-display-name:Red Hat, Inc. tags:mysql,hidden version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/mysql-80-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0013c2490 {false false} {Local}} 

Oct 13 10:22:54.689: INFO: Checking tag {8.0-el7 map[description:Provides a MySQL 8.0 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/mysql-80-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0013c24f0 {false false} {Local}} 

Oct 13 10:22:54.689: INFO: Checking tag {8.0-el8 map[description:Provides a MySQL 8.0 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/mysql-80:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0013c2550 {false false} {Local}} 

Oct 13 10:22:54.689: INFO: Checking tag {latest map[description:Provides a MySQL database on RHEL. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of MySQL available on OpenShift, including major version updates. iconClass:icon-mysql-database openshift.io/display-name:MySQL (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:8.0-el8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0013c25e0 {false false} {Local}} 

Oct 13 10:22:54.689: INFO: Checking language postgresql 

Oct 13 10:22:54.710: INFO: Checking tag {10 map[description:Provides a PostgreSQL 10 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Ephemeral) 10 openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql,hidden version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-10-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc100 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking tag {10-el7 map[description:Provides a PostgreSQL 10 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 10 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-10-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc170 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking tag {10-el8 map[description:Provides a PostgreSQL 10 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 10 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-10:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc1e0 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking tag {12 map[description:Provides a PostgreSQL 12 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Ephemeral) 12 openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc250 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking tag {12-el7 map[description:Provides a PostgreSQL 12 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 12 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc2c0 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking tag {12-el8 map[description:Provides a PostgreSQL 12 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 12 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc330 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking tag {13-el7 map[description:Provides a PostgreSQL 13 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 13 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:13] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-13-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc3a0 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking tag {13-el8 map[description:Provides a PostgreSQL 13 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 13 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:13] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-13:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc410 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking tag {9.6-el8 map[description:Provides a PostgreSQL 9.6 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 9.6 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:9.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-96:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc480 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking tag {latest map[description:Provides a PostgreSQL database on RHEL. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of PostgreSQL available on OpenShift, including major version updates. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:13-el8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc508 {false false} {Local}} 

Oct 13 10:22:54.710: INFO: Checking language jenkins 

Oct 13 10:22:54.744: INFO: Checking tag {2 map[description:Provides a Jenkins 2.X server on RHEL. For more information about using this container image, including OpenShift considerations, see https://github.com/openshift/jenkins/blob/master/README.md. iconClass:icon-jenkins openshift.io/display-name:Jenkins 2.X openshift.io/provider-display-name:Red Hat, Inc. tags:jenkins version:2.x] &ObjectReference{Kind:DockerImage,Namespace:,Name:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba54e46eebfe50a572eb683ebc0960d5c682635e4640b480c7274bb9fa81e26,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0018eece0 {false false} {Local}} 

Oct 13 10:22:54.744: INFO: Checking tag {latest map[description:Provides a Jenkins server on RHEL. For more information about using this container image, including OpenShift considerations, see https://github.com/openshift/jenkins/blob/master/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Jenkins available on OpenShift, including major versions updates. iconClass:icon-jenkins openshift.io/display-name:Jenkins (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:jenkins] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:2,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0018eed68 {false false} {Local}} 

Oct 13 10:22:54.745: INFO: Success! 

[It] should clone using git:// if no proxy is configured [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:36
[AfterEach] 
  github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:28
[AfterEach] [sig-builds][Feature:Builds] clone repository using git:// protocol
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:22:54.780: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-build-clone-git-protocol-hm7qz-user}, err: <nil>
Oct 13 10:22:54.798: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-build-clone-git-protocol-hm7qz}, err: <nil>
Oct 13 10:22:54.813: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~N4poXk5eRlIp6hqij4C0cbqcI3dV_o9-qfGC8hhJr-0}, err: <nil>
[AfterEach] [sig-builds][Feature:Builds] clone repository using git:// protocol
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-build-clone-git-protocol-hm7qz" for this suite.
skip [github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:40]: test disabled due to https://bugzilla.redhat.com/show_bug.cgi?id=2019433 and https://github.blog/2021-09-01-improving-git-protocol-security-github/#git-protocol-troubleshooting: 'The unauthenticated git protocol on port 9418 is no longer supported'
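
For context on the skip above: GitHub no longer serves the unauthenticated git:// protocol on port 9418, so a clone URL using that scheme would have to be rewritten to HTTPS (or SSH) before any build could fetch it. A minimal Go sketch of that rewrite, purely illustrative and not part of the test suite:

package main

import (
	"fmt"
	"strings"
)

// toHTTPS rewrites a git:// clone URL to its https:// equivalent.
// Hypothetical helper for illustration only; the real test simply skips.
func toHTTPS(cloneURL string) string {
	return strings.Replace(cloneURL, "git://", "https://", 1)
}

func main() {
	fmt.Println(toHTTPS("git://github.com/sclorg/nodejs-ex.git"))
	// Output: https://github.com/sclorg/nodejs-ex.git
}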

Stderr
_sig-builds__Feature_Builds__oc_new-app__should_succeed_with_a_--name_of_58_characters__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 146.0s

Failed:
fail [github.com/openshift/origin/test/extended/builds/new_app.go:68]: Unexpected error:
    <*errors.errorString | 0xc00295bda0>: {
        s: "The build \"a234567890123456789012345678901234567890123456789012345678-1\" status is \"Failed\"",
    }
    The build "a234567890123456789012345678901234567890123456789012345678-1" status is "Failed"
occurred
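
The failure above comes from the test waiting for the build to reach a terminal phase and finding "Failed" instead of "Complete". A minimal, self-contained sketch of the shape of such a wait loop; fetchPhase is a hypothetical stand-in for the API call that reads the Build's status.phase, and this is not the origin WaitForABuild implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForBuild polls fetchPhase until the build reaches a terminal phase
// or the timeout elapses.
func waitForBuild(fetchPhase func() string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		switch phase := fetchPhase(); phase {
		case "Complete":
			return nil
		case "Failed", "Error", "Cancelled":
			return fmt.Errorf("the build status is %q", phase)
		}
		time.Sleep(5 * time.Second)
	}
	return errors.New("timed out waiting for the build")
}

func main() {
	// Simulate the outcome seen in this report.
	err := waitForBuild(func() string { return "Failed" }, time.Minute)
	fmt.Println(err)
}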

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-builds][Feature:Builds] oc new-app
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-builds][Feature:Builds] oc new-app
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:22:39.300: INFO: configPath is now "/tmp/configfile303236974"
Oct 13 10:22:39.300: INFO: The user is now "e2e-test-new-app-wckrp-user"
Oct 13 10:22:39.300: INFO: Creating project "e2e-test-new-app-wckrp"
Oct 13 10:22:39.579: INFO: Waiting on permissions in project "e2e-test-new-app-wckrp" ...
Oct 13 10:22:39.594: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:22:39.702: INFO: Waiting for service account "default" secrets (default-token-cr5gb) to include dockercfg/token ...
Oct 13 10:22:39.817: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:22:39.928: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:22:40.036: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:22:40.046: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:22:40.052: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:22:40.580: INFO: Project "e2e-test-new-app-wckrp" has been fully provisioned.
[BeforeEach] 
  github.com/openshift/origin/test/extended/builds/new_app.go:32
[JustBeforeEach] 
  github.com/openshift/origin/test/extended/builds/new_app.go:36
STEP: waiting on the local namespace builder/default SAs
STEP: waiting for openshift namespace imagestreams
Oct 13 10:22:40.793: INFO: Waiting up to 2 minutes for the internal registry hostname to be published
Oct 13 10:22:42.870: INFO: the OCM pod logs indicate the build controller was started after the internal registry hostname has been set in the OCM config
Oct 13 10:22:42.883: INFO: OCM rollout progressing status reports complete
Oct 13 10:22:42.883: INFO: Scanning openshift ImageStreams 

Oct 13 10:22:52.909: INFO: SamplesOperator at steady state
Oct 13 10:22:52.909: INFO: SamplesOperator at steady state
Oct 13 10:22:52.909: INFO: Checking language ruby 

Oct 13 10:22:52.954: INFO: Checking tag {2.5-ubi8 map[description:Build and run Ruby 2.5 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.5/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.5 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.5,ruby tags:builder,ruby version:2.5] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-25:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9430 {false false} {Local}} 

Oct 13 10:22:52.954: INFO: Checking tag {2.6 map[description:Build and run Ruby 2.6 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby,hidden version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/ruby-26-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f94d0 {false false} {Local}} 

Oct 13 10:22:52.954: INFO: Checking tag {2.6-ubi7 map[description:Build and run Ruby 2.6 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-26:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f95b0 {false false} {Local}} 

Oct 13 10:22:52.954: INFO: Checking tag {2.6-ubi8 map[description:Build and run Ruby 2.6 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-26:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9670 {false false} {Local}} 

Oct 13 10:22:52.954: INFO: Checking tag {2.7 map[description:Build and run Ruby 2.7 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby,hidden version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/ruby-27-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9710 {false false} {Local}} 

Oct 13 10:22:52.954: INFO: Checking tag {2.7-ubi7 map[description:Build and run Ruby 2.7 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f97d0 {false false} {Local}} 

Oct 13 10:22:52.954: INFO: Checking tag {2.7-ubi8 map[description:Build and run Ruby 2.7 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9890 {false false} {Local}} 

Oct 13 10:22:52.954: INFO: Checking tag {3.0-ubi7 map[description:Build and run Ruby 3.0 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/3.0/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 3.0 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:3.0,ruby tags:builder,ruby version:3.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-30:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9950 {false false} {Local}} 

Oct 13 10:22:52.954: INFO: Checking tag {latest map[description:Build and run Ruby applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/tree/master/2.7/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Ruby available on OpenShift, including major version updates. iconClass:icon-ruby openshift.io/display-name:Ruby (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby tags:builder,ruby] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:2.7-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9a18 {false false} {Local}} 

Oct 13 10:22:52.954: INFO: Checking language nodejs 

Oct 13 10:22:52.988: INFO: Checking tag {12 map[description:Build and run Node.js 12 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/nodejs-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c34f80 {false false} {Local}} 

Oct 13 10:22:52.988: INFO: Checking tag {12-ubi7 map[description:Build and run Node.js 12 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/nodejs-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35000 {false false} {Local}} 

Oct 13 10:22:52.988: INFO: Checking tag {12-ubi8 map[description:Build and run Node.js 12 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35090 {false false} {Local}} 

Oct 13 10:22:52.988: INFO: Checking tag {14-ubi7 map[description:Build and run Node.js 14 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/nodejs-14:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35110 {false false} {Local}} 

Oct 13 10:22:52.988: INFO: Checking tag {14-ubi8 map[description:Build and run Node.js 14 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-14:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c351a0 {false false} {Local}} 

Oct 13 10:22:52.988: INFO: Checking tag {14-ubi8-minimal map[description:Build and run Node.js 14 applications on UBI 8 Minimal. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 8 Minimal) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-14-minimal:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35240 {false false} {Local}} 

Oct 13 10:22:52.988: INFO: Checking tag {latest map[description:Build and run Node.js applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Node.js available on OpenShift, including major version updates. iconClass:icon-nodejs openshift.io/display-name:Node.js (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git supports:nodejs tags:builder,nodejs] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:14-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35310 {false false} {Local}} 

Oct 13 10:22:52.988: INFO: Checking language perl 

Oct 13 10:22:53.002: INFO: Checking tag {5.26-ubi8 map[description:Build and run Perl 5.26 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.26-mod_fcgid/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.26 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.26,perl tags:builder,perl version:5.26] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/perl-526:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761a20 {false false} {Local}} 

Oct 13 10:22:53.003: INFO: Checking tag {5.30 map[description:Build and run Perl 5.30 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl,hidden version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/perl-530-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761b20 {false false} {Local}} 

Oct 13 10:22:53.003: INFO: Checking tag {5.30-el7 map[description:Build and run Perl 5.30 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/perl-530-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761cb0 {false false} {Local}} 

Oct 13 10:22:53.003: INFO: Checking tag {5.30-ubi8 map[description:Build and run Perl 5.30 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30-mod_fcgid/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/perl-530:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761d80 {false false} {Local}} 

Oct 13 10:22:53.003: INFO: Checking tag {latest map[description:Build and run Perl applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30-mod_fcgid/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Perl available on OpenShift, including major version updates. iconClass:icon-perl openshift.io/display-name:Perl (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl tags:builder,perl] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:5.30-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761e50 {false false} {Local}} 

Oct 13 10:22:53.003: INFO: Checking language php 

Oct 13 10:22:53.015: INFO: Checking tag {7.3 map[description:Build and run PHP 7.3 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php,hidden version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/php-73-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d0a0 {false false} {Local}} 

Oct 13 10:22:53.016: INFO: Checking tag {7.3-ubi7 map[description:Build and run PHP 7.3 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/php-73:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d160 {false false} {Local}} 

Oct 13 10:22:53.016: INFO: Checking tag {7.3-ubi8 map[description:Build and run PHP 7.3 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/php-73:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d220 {false false} {Local}} 

Oct 13 10:22:53.016: INFO: Checking tag {7.4-ubi8 map[description:Build and run PHP 7.4 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.4/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.4 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.4,php tags:builder,php version:7.4] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/php-74:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d2e0 {false false} {Local}} 

Oct 13 10:22:53.016: INFO: Checking tag {latest map[description:Build and run PHP applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.4/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of PHP available on OpenShift, including major version updates. iconClass:icon-php openshift.io/display-name:PHP (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php tags:builder,php] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:7.4-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d3a0 {false false} {Local}} 

Oct 13 10:22:53.016: INFO: Checking language python 

Oct 13 10:22:53.030: INFO: Checking tag {2.7 map[description:Build and run Python 2.7 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python,hidden version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/python-27-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8180 {false false} {Local}} 

Oct 13 10:22:53.030: INFO: Checking tag {2.7-ubi7 map[description:Build and run Python 2.7 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/python-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8220 {false false} {Local}} 

Oct 13 10:22:53.030: INFO: Checking tag {2.7-ubi8 map[description:Build and run Python 2.7 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d82c0 {false false} {Local}} 

Oct 13 10:22:53.030: INFO: Checking tag {3.6-ubi8 map[description:Build and run Python 3.6 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.6/README.md. iconClass:icon-python openshift.io/display-name:Python 3.6 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.6,python tags:builder,python version:3.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-36:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8360 {false false} {Local}} 

Oct 13 10:22:53.030: INFO: Checking tag {3.8 map[description:Build and run Python 3.8 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python,hidden version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/python-38-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8400 {false false} {Local}} 

Oct 13 10:22:53.030: INFO: Checking tag {3.8-ubi7 map[description:Build and run Python 3.8 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/python-38:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d84a0 {false false} {Local}} 

Oct 13 10:22:53.030: INFO: Checking tag {3.8-ubi8 map[description:Build and run Python 3.8 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-38:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8540 {false false} {Local}} 

Oct 13 10:22:53.030: INFO: Checking tag {3.9-ubi8 map[description:Build and run Python 3.9 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.9/README.md. iconClass:icon-python openshift.io/display-name:Python 3.9 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.9,python tags:builder,python version:3.9] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-39:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d85e0 {false false} {Local}} 

Oct 13 10:22:53.030: INFO: Checking tag {latest map[description:Build and run Python applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.9/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Python available on OpenShift, including major version updates. iconClass:icon-python openshift.io/display-name:Python (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python tags:builder,python] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:3.9-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d86b0 {false false} {Local}} 

Oct 13 10:22:53.030: INFO: Checking language mysql 

Oct 13 10:22:53.044: INFO: Checking tag {8.0 map[description:Provides a MySQL 8.0 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 openshift.io/provider-display-name:Red Hat, Inc. tags:mysql,hidden version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/mysql-80-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d98c0 {false false} {Local}} 

Oct 13 10:22:53.044: INFO: Checking tag {8.0-el7 map[description:Provides a MySQL 8.0 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/mysql-80-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d9920 {false false} {Local}} 

Oct 13 10:22:53.044: INFO: Checking tag {8.0-el8 map[description:Provides a MySQL 8.0 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/mysql-80:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d9980 {false false} {Local}} 

Oct 13 10:22:53.044: INFO: Checking tag {latest map[description:Provides a MySQL database on RHEL. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of MySQL available on OpenShift, including major version updates. iconClass:icon-mysql-database openshift.io/display-name:MySQL (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:8.0-el8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d9a10 {false false} {Local}} 

Oct 13 10:22:53.044: INFO: Checking language postgresql 

Oct 13 10:22:53.098: INFO: Checking tag {10 map[description:Provides a PostgreSQL 10 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Ephemeral) 10 openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql,hidden version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-10-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6820 {false false} {Local}} 

Oct 13 10:22:53.098: INFO: Checking tag {10-el7 map[description:Provides a PostgreSQL 10 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 10 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-10-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6890 {false false} {Local}} 

Oct 13 10:22:53.099: INFO: Checking tag {10-el8 map[description:Provides a PostgreSQL 10 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 10 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-10:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6900 {false false} {Local}} 

Oct 13 10:22:53.099: INFO: Checking tag {12 map[description:Provides a PostgreSQL 12 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Ephemeral) 12 openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6970 {false false} {Local}} 

Oct 13 10:22:53.099: INFO: Checking tag {12-el7 map[description:Provides a PostgreSQL 12 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 12 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e69e0 {false false} {Local}} 

Oct 13 10:22:53.099: INFO: Checking tag {12-el8 map[description:Provides a PostgreSQL 12 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 12 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6a50 {false false} {Local}} 

Oct 13 10:22:53.099: INFO: Checking tag {13-el7 map[description:Provides a PostgreSQL 13 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 13 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:13] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-13-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6ac0 {false false} {Local}} 

Oct 13 10:22:53.099: INFO: Checking tag {13-el8 map[description:Provides a PostgreSQL 13 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 13 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:13] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-13:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6b30 {false false} {Local}} 

Oct 13 10:22:53.099: INFO: Checking tag {9.6-el8 map[description:Provides a PostgreSQL 9.6 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 9.6 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:9.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-96:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6ba0 {false false} {Local}} 

Oct 13 10:22:53.099: INFO: Checking tag {latest map[description:Provides a PostgreSQL database on RHEL. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of PostgreSQL available on OpenShift, including major version updates. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:13-el8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6c28 {false false} {Local}} 

Oct 13 10:22:53.099: INFO: Checking language jenkins 

Oct 13 10:22:53.110: INFO: Checking tag {2 map[description:Provides a Jenkins 2.X server on RHEL. For more information about using this container image, including OpenShift considerations, see https://github.com/openshift/jenkins/blob/master/README.md. iconClass:icon-jenkins openshift.io/display-name:Jenkins 2.X openshift.io/provider-display-name:Red Hat, Inc. tags:jenkins version:2.x] &ObjectReference{Kind:DockerImage,Namespace:,Name:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba54e46eebfe50a572eb683ebc0960d5c682635e4640b480c7274bb9fa81e26,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e7d70 {false false} {Local}} 

Oct 13 10:22:53.110: INFO: Checking tag {latest map[description:Provides a Jenkins server on RHEL. For more information about using this container image, including OpenShift considerations, see https://github.com/openshift/jenkins/blob/master/README.md.

WARNING: By selecting this tag, your application will automatically update to use the latest version of Jenkins available on OpenShift, including major versions updates. iconClass:icon-jenkins openshift.io/display-name:Jenkins (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:jenkins] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:2,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e7df8 {false false} {Local}} 

Oct 13 10:22:53.110: INFO: Success! 

[It] should succeed with a --name of 58 characters [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/builds/new_app.go:57
STEP: calling oc new-app
Oct 13 10:22:53.110: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 new-app https://github.com/sclorg/nodejs-ex --name a234567890123456789012345678901234567890123456789012345678 --build-env=BUILD_LOGLEVEL=5'
--> Found image 33ddc20 (5 weeks old) in image stream "openshift/nodejs" under tag "14-ubi8" for "nodejs"

    Node.js 14 
    ---------- 
    Node.js 14 available as container is a base platform for building and running various Node.js 14 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    Tags: builder, nodejs, nodejs14

    * The source repository appears to match: nodejs
    * A source build using source code from https://github.com/sclorg/nodejs-ex will be created
      * The resulting image will be pushed to image stream tag "a234567890123456789012345678901234567890123456789012345678:latest"
      * Use 'oc start-build' to trigger a new build

--> Creating resources ...
    imagestream.image.openshift.io "a234567890123456789012345678901234567890123456789012345678" created
    buildconfig.build.openshift.io "a234567890123456789012345678901234567890123456789012345678" created
    deployment.apps "a234567890123456789012345678901234567890123456789012345678" created
    service "a234567890123456789012345678901234567890123456789012345678" created
--> Success
    Build scheduled, use 'oc logs -f buildconfig/a234567890123456789012345678901234567890123456789012345678' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose service/a234567890123456789012345678901234567890123456789012345678' 
    Run 'oc status' to view your app.
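
Why a 58-character --name is an interesting boundary: the derived objects pick up suffixes, so the names the cluster actually has to accept are longer than the --name itself. A small, illustrative sketch (not from the test) that just computes the lengths of the names appearing in this report, next to the usual Kubernetes limits:

package main

import "fmt"

func main() {
	// The 58-character name passed to oc new-app above (copied from this log).
	const name = "a234567890123456789012345678901234567890123456789012345678"

	// Derived object names that appear later in this report.
	build := name + "-1"          // the Build object
	buildPod := name + "-1-build" // the build pod

	// For reference: label values and DNS-1123 labels are capped at 63
	// characters; most Kubernetes object names are capped at 253.
	fmt.Println(len(name), len(build), len(buildPod)) // 58 60 66
}

STEP: waiting for the build to complete follows in the log.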
STEP: waiting for the build to complete
Oct 13 10:25:01.947: INFO: WaitForABuild returning with error: The build "a234567890123456789012345678901234567890123456789012345678-1" status is "Failed"
Oct 13 10:25:01.948: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs -f bc/a234567890123456789012345678901234567890123456789012345678 --timestamps'
Oct 13 10:25:02.176: INFO: 

  build logs : 2022-10-13T10:23:23.406744754Z I1013 10:23:23.406661       1 builder.go:393] openshift-builder 4.9.0-202210061647.p0.g1a32676.assembly.stream-1a32676
2022-10-13T10:23:23.406940329Z I1013 10:23:23.406922       1 builder.go:393] Powered by buildah v1.22.4
2022-10-13T10:23:23.415829487Z I1013 10:23:23.415787       1 builder.go:394] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
2022-10-13T10:23:23.416849352Z Cloning "https://github.com/sclorg/nodejs-ex" ...
2022-10-13T10:23:23.416873736Z I1013 10:23:23.416849       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
2022-10-13T10:23:23.416873736Z I1013 10:23:23.416865       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
2022-10-13T10:23:39.417973933Z I1013 10:23:39.417875       1 repository.go:545] Command execution timed out after 16s
2022-10-13T10:23:39.418108783Z WARNING: timed out waiting for git server, will wait 1m4s
2022-10-13T10:23:39.418149158Z 
2022-10-13T10:23:39.418181835Z I1013 10:23:39.418170       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
2022-10-13T10:23:39.418225370Z I1013 10:23:39.418214       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
2022-10-13T10:23:59.503403201Z I1013 10:23:59.503331       1 repository.go:541] Error executing command: exit status 128
2022-10-13T10:23:59.503554375Z I1013 10:23:59.503536       1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
2022-10-13T10:24:59.655751191Z error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com


Oct 13 10:25:02.176: INFO: Dumping pod state for namespace e2e-test-new-app-wckrp
Oct 13 10:25:02.176: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config get pods -o yaml'
Oct 13 10:25:02.371: INFO: apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.165.125"
            ],
            "mac": "fa:16:3e:31:30:74",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.165.125"
            ],
            "mac": "fa:16:3e:31:30:74",
            "default": true,
            "dns": {}
        }]
      openshift.io/build.name: a234567890123456789012345678901234567890123456789012345678-1
      openshift.io/scc: privileged
    creationTimestamp: "2022-10-13T10:22:56Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    labels:
      openshift.io/build.name: a234567890123456789012345678901234567890123456789012345678-1
    name: a234567890123456789012345678901234567890123456789012345678-1-build
    namespace: e2e-test-new-app-wckrp
    ownerReferences:
    - apiVersion: build.openshift.io/v1
      controller: true
      kind: Build
      name: a234567890123456789012345678901234567890123456789012345678-1
      uid: e4b59e1a-94a3-4d33-a826-9b209b205ee1
    resourceVersion: "955211"
    uid: cd09e5be-7847-4742-8f63-c558a46f2b21
  spec:
    activeDeadlineSeconds: 604800
    containers:
    - args:
      - openshift-sti-build
      - --loglevel=5
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
      - name: LANG
        value: C.utf8
      - name: SOURCE_REPOSITORY
        value: https://github.com/sclorg/nodejs-ex
      - name: SOURCE_URI
        value: https://github.com/sclorg/nodejs-ex
      - name: BUILD_LOGLEVEL
        value: "5"
      - name: ALLOWED_UIDS
        value: 1-
      - name: DROP_CAPS
        value: KILL,MKNOD,SETGID,SETUID
      - name: PUSH_DOCKERCFG_PATH
        value: /var/run/secrets/openshift.io/push
      - name: PULL_DOCKERCFG_PATH
        value: /var/run/secrets/openshift.io/pull
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_STORAGE_DRIVER
        value: overlay
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: sti-build
      resources: {}
      securityContext:
        privileged: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/lib/kubelet/config.json
        name: node-pullsecrets
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/lib/containers/cache
        name: buildcachedir
      - mountPath: /var/run/secrets/openshift.io/push
        name: builder-dockercfg-xsbfr-push
        readOnly: true
      - mountPath: /var/run/secrets/openshift.io/pull
        name: builder-dockercfg-xsbfr-pull
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/lib/containers/storage
        name: container-storage-root
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-lx97v
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: builder-dockercfg-xsbfr
    initContainers:
    - args:
      - openshift-git-clone
      - --loglevel=5
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
      - name: LANG
        value: C.utf8
      - name: SOURCE_REPOSITORY
        value: https://github.com/sclorg/nodejs-ex
      - name: SOURCE_URI
        value: https://github.com/sclorg/nodejs-ex
      - name: BUILD_LOGLEVEL
        value: "5"
      - name: ALLOWED_UIDS
        value: 1-
      - name: DROP_CAPS
        value: KILL,MKNOD,SETGID,SETUID
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: git-clone
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-lx97v
        readOnly: true
    - args:
      - openshift-manage-dockerfile
      - --loglevel=5
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
      - name: LANG
        value: C.utf8
      - name: SOURCE_REPOSITORY
        value: https://github.com/sclorg/nodejs-ex
      - name: SOURCE_URI
        value: https://github.com/sclorg/nodejs-ex
      - name: BUILD_LOGLEVEL
        value: "5"
      - name: ALLOWED_UIDS
        value: 1-
      - name: DROP_CAPS
        value: KILL,MKNOD,SETGID,SETUID
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: manage-dockerfile
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-lx97v
        readOnly: true
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Never
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: builder
    serviceAccountName: builder
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - hostPath:
        path: /var/lib/kubelet/config.json
        type: File
      name: node-pullsecrets
    - hostPath:
        path: /var/lib/containers/cache
        type: ""
      name: buildcachedir
    - emptyDir: {}
      name: buildworkdir
    - name: builder-dockercfg-xsbfr-push
      secret:
        defaultMode: 384
        secretName: builder-dockercfg-xsbfr
    - name: builder-dockercfg-xsbfr-pull
      secret:
        defaultMode: 384
        secretName: builder-dockercfg-xsbfr
    - configMap:
        defaultMode: 420
        name: a234567890123456789012345678901234567890123456789012345678-1-sys-config
      name: build-system-configs
    - configMap:
        defaultMode: 420
        items:
        - key: service-ca.crt
          path: certs.d/image-registry.openshift-image-registry.svc:5000/ca.crt
        name: a234567890123456789012345678901234567890123456789012345678-1-ca
      name: build-ca-bundles
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: a234567890123456789012345678901234567890123456789012345678-1-global-ca
      name: build-proxy-ca-bundles
    - emptyDir: {}
      name: container-storage-root
    - emptyDir: {}
      name: build-blob-cache
    - name: kube-api-access-lx97v
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:22:56Z"
      message: 'containers with incomplete status: [git-clone manage-dockerfile]'
      reason: ContainersNotInitialized
      status: "False"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:22:56Z"
      message: 'containers with unready status: [sti-build]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:22:56Z"
      message: 'containers with unready status: [sti-build]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:22:56Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: ""
      lastState: {}
      name: sti-build
      ready: false
      restartCount: 0
      started: false
      state:
        waiting:
          reason: PodInitializing
    hostIP: 10.196.2.169
    initContainerStatuses:
    - containerID: cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      lastState: {}
      name: git-clone
      ready: false
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0
          exitCode: 1
          finishedAt: "2022-10-13T10:24:59Z"
          message: |
            value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
            Cloning "https://github.com/sclorg/nodejs-ex" ...
            I1013 10:23:23.416849       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
            I1013 10:23:23.416865       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
            I1013 10:23:39.417875       1 repository.go:545] Command execution timed out after 16s
            WARNING: timed out waiting for git server, will wait 1m4s
            I1013 10:23:39.418170       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
            I1013 10:23:39.418214       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
            I1013 10:23:59.503331       1 repository.go:541] Error executing command: exit status 128
            I1013 10:23:59.503536       1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
            error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
          reason: Error
          startedAt: "2022-10-13T10:23:23Z"
    - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: ""
      lastState: {}
      name: manage-dockerfile
      ready: false
      restartCount: 0
      state:
        waiting:
          reason: PodInitializing
    phase: Pending
    podIP: 10.128.165.125
    podIPs:
    - ip: 10.128.165.125
    qosClass: BestEffort
    startTime: "2022-10-13T10:22:56Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
[AfterEach] 
  github.com/openshift/origin/test/extended/builds/new_app.go:47
Oct 13 10:25:02.372: INFO: Dumping pod state for namespace e2e-test-new-app-wckrp
Oct 13 10:25:02.372: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config get pods -o yaml'
Oct 13 10:25:02.555: INFO: apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.165.125"
            ],
            "mac": "fa:16:3e:31:30:74",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.165.125"
            ],
            "mac": "fa:16:3e:31:30:74",
            "default": true,
            "dns": {}
        }]
      openshift.io/build.name: a234567890123456789012345678901234567890123456789012345678-1
      openshift.io/scc: privileged
    creationTimestamp: "2022-10-13T10:22:56Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    labels:
      openshift.io/build.name: a234567890123456789012345678901234567890123456789012345678-1
    name: a234567890123456789012345678901234567890123456789012345678-1-build
    namespace: e2e-test-new-app-wckrp
    ownerReferences:
    - apiVersion: build.openshift.io/v1
      controller: true
      kind: Build
      name: a234567890123456789012345678901234567890123456789012345678-1
      uid: e4b59e1a-94a3-4d33-a826-9b209b205ee1
    resourceVersion: "955279"
    uid: cd09e5be-7847-4742-8f63-c558a46f2b21
  spec:
    activeDeadlineSeconds: 604800
    containers:
    - args:
      - openshift-sti-build
      - --loglevel=5
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
      - name: LANG
        value: C.utf8
      - name: SOURCE_REPOSITORY
        value: https://github.com/sclorg/nodejs-ex
      - name: SOURCE_URI
        value: https://github.com/sclorg/nodejs-ex
      - name: BUILD_LOGLEVEL
        value: "5"
      - name: ALLOWED_UIDS
        value: 1-
      - name: DROP_CAPS
        value: KILL,MKNOD,SETGID,SETUID
      - name: PUSH_DOCKERCFG_PATH
        value: /var/run/secrets/openshift.io/push
      - name: PULL_DOCKERCFG_PATH
        value: /var/run/secrets/openshift.io/pull
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_STORAGE_DRIVER
        value: overlay
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: sti-build
      resources: {}
      securityContext:
        privileged: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/lib/kubelet/config.json
        name: node-pullsecrets
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/lib/containers/cache
        name: buildcachedir
      - mountPath: /var/run/secrets/openshift.io/push
        name: builder-dockercfg-xsbfr-push
        readOnly: true
      - mountPath: /var/run/secrets/openshift.io/pull
        name: builder-dockercfg-xsbfr-pull
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/lib/containers/storage
        name: container-storage-root
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-lx97v
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: builder-dockercfg-xsbfr
    initContainers:
    - args:
      - openshift-git-clone
      - --loglevel=5
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
      - name: LANG
        value: C.utf8
      - name: SOURCE_REPOSITORY
        value: https://github.com/sclorg/nodejs-ex
      - name: SOURCE_URI
        value: https://github.com/sclorg/nodejs-ex
      - name: BUILD_LOGLEVEL
        value: "5"
      - name: ALLOWED_UIDS
        value: 1-
      - name: DROP_CAPS
        value: KILL,MKNOD,SETGID,SETUID
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: git-clone
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-lx97v
        readOnly: true
    - args:
      - openshift-manage-dockerfile
      - --loglevel=5
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
      - name: LANG
        value: C.utf8
      - name: SOURCE_REPOSITORY
        value: https://github.com/sclorg/nodejs-ex
      - name: SOURCE_URI
        value: https://github.com/sclorg/nodejs-ex
      - name: BUILD_LOGLEVEL
        value: "5"
      - name: ALLOWED_UIDS
        value: 1-
      - name: DROP_CAPS
        value: KILL,MKNOD,SETGID,SETUID
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: manage-dockerfile
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-lx97v
        readOnly: true
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Never
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: builder
    serviceAccountName: builder
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - hostPath:
        path: /var/lib/kubelet/config.json
        type: File
      name: node-pullsecrets
    - hostPath:
        path: /var/lib/containers/cache
        type: ""
      name: buildcachedir
    - emptyDir: {}
      name: buildworkdir
    - name: builder-dockercfg-xsbfr-push
      secret:
        defaultMode: 384
        secretName: builder-dockercfg-xsbfr
    - name: builder-dockercfg-xsbfr-pull
      secret:
        defaultMode: 384
        secretName: builder-dockercfg-xsbfr
    - configMap:
        defaultMode: 420
        name: a234567890123456789012345678901234567890123456789012345678-1-sys-config
      name: build-system-configs
    - configMap:
        defaultMode: 420
        items:
        - key: service-ca.crt
          path: certs.d/image-registry.openshift-image-registry.svc:5000/ca.crt
        name: a234567890123456789012345678901234567890123456789012345678-1-ca
      name: build-ca-bundles
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: a234567890123456789012345678901234567890123456789012345678-1-global-ca
      name: build-proxy-ca-bundles
    - emptyDir: {}
      name: container-storage-root
    - emptyDir: {}
      name: build-blob-cache
    - name: kube-api-access-lx97v
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:22:56Z"
      message: 'containers with incomplete status: [git-clone manage-dockerfile]'
      reason: ContainersNotInitialized
      status: "False"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:22:56Z"
      message: 'containers with unready status: [sti-build]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:22:56Z"
      message: 'containers with unready status: [sti-build]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:22:56Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: ""
      lastState: {}
      name: sti-build
      ready: false
      restartCount: 0
      started: false
      state:
        waiting:
          reason: PodInitializing
    hostIP: 10.196.2.169
    initContainerStatuses:
    - containerID: cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      lastState: {}
      name: git-clone
      ready: false
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0
          exitCode: 1
          finishedAt: "2022-10-13T10:24:59Z"
          message: |
            value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
            Cloning "https://github.com/sclorg/nodejs-ex" ...
            I1013 10:23:23.416849       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
            I1013 10:23:23.416865       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
            I1013 10:23:39.417875       1 repository.go:545] Command execution timed out after 16s
            WARNING: timed out waiting for git server, will wait 1m4s
            I1013 10:23:39.418170       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
            I1013 10:23:39.418214       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
            I1013 10:23:59.503331       1 repository.go:541] Error executing command: exit status 128
            I1013 10:23:59.503536       1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
            error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
          reason: Error
          startedAt: "2022-10-13T10:23:23Z"
    - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: ""
      lastState: {}
      name: manage-dockerfile
      ready: false
      restartCount: 0
      state:
        waiting:
          reason: PodInitializing
    phase: Failed
    podIP: 10.128.165.125
    podIPs:
    - ip: 10.128.165.125
    qosClass: BestEffort
    startTime: "2022-10-13T10:22:56Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Oct 13 10:25:02.555: INFO: Dumping configMap state for namespace e2e-test-new-app-wckrp
Oct 13 10:25:02.556: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config get configmaps -o yaml'
Oct 13 10:25:02.745: INFO: apiVersion: v1
items:
- apiVersion: v1
  data:
    service-ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIDUTCCAjmgAwIBAgIIWqQHBq17DxYwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE
      Awwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTY2NTUwNDg0ODAe
      Fw0yMjEwMTExNjE0MDhaFw0yNDEyMDkxNjE0MDlaMDYxNDAyBgNVBAMMK29wZW5z
      aGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE2NjU1MDQ4NDgwggEiMA0GCSqG
      SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCnQ7kRVFI9BQbx1ViDxaiQ0OxvNHomJEpt
      HoOQ4O+2U28imqMZoMPQH172nxIpxyNufn/4ObLXEBqNshYRcWv6p16GPLAXxYP2
      C4K4H8jQKGPFdtcoe8feeCuWlCghi9AHCa5/pzGK94eDF/hLrsf6zQ+iGx+3FqRf
      9m8CqbGdPkvRzWkbX/cNgIAE2SkEfB1jEiygA0kNmQ0lDN0yOoKUwm3UhOBRCr3m
      mwnYpHWlDQ4anvKKGaz6iqjhn8MZEUXg0b6SpplH/oRko+vqPLYbcxx19Etz7e02
      k7866xfEz8Upw/rq/rfjGqbx0p8WIwmngG1JowbAOdNc4We0mfPZAgMBAAGjYzBh
      MA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTKL313
      5EZX7D2w6+wXudOGBxB6STAfBgNVHSMEGDAWgBTKL3135EZX7D2w6+wXudOGBxB6
      STANBgkqhkiG9w0BAQsFAAOCAQEAGlUnIqdKOpkqrBgCBIBJxJq8WdZeGwTWVHAn
      6LFPsHVSpV8b50ENOQzkrmyL2CM1JPGUFHvUr81pRT7IKKlNa7Gi8f5aUlyg/wc3
      tmYB9PyO7KU3EkVxU7KfzCtMYHu/2H0PNeSTKVzgyLA4V7pEZDvCwhOjfKkerVvM
      CmVoo8XwgTmARM3nNCKQ3Yap0OGU388CmvuRfFkdh1i11xzs34CHIOER+JYSqV5e
      zVCHpEDuUG/yE0pf4XeqchIv3rCWyt1J5egkSMlBHP9Zhb+IVcd8nIA4kSBijRjB
      MYGk7eVOXTTBTiuzt2rBlStjWvtjHspLyTbbObqbtrAdv92YfQ==
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-13T10:22:56Z"
    name: a234567890123456789012345678901234567890123456789012345678-1-ca
    namespace: e2e-test-new-app-wckrp
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: a234567890123456789012345678901234567890123456789012345678-1-build
      uid: cd09e5be-7847-4742-8f63-c558a46f2b21
    resourceVersion: "951626"
    uid: 63de9da3-3fef-4f31-9152-5b13dcd95571
- apiVersion: v1
  data:
    ca-bundle.crt: ""
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-13T10:22:56Z"
    name: a234567890123456789012345678901234567890123456789012345678-1-global-ca
    namespace: e2e-test-new-app-wckrp
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: a234567890123456789012345678901234567890123456789012345678-1-build
      uid: cd09e5be-7847-4742-8f63-c558a46f2b21
    resourceVersion: "951635"
    uid: 0926106b-b07c-4664-bdfc-a3d3946485ba
- apiVersion: v1
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-13T10:22:56Z"
    name: a234567890123456789012345678901234567890123456789012345678-1-sys-config
    namespace: e2e-test-new-app-wckrp
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: a234567890123456789012345678901234567890123456789012345678-1-build
      uid: cd09e5be-7847-4742-8f63-c558a46f2b21
    resourceVersion: "951631"
    uid: f5d615ad-2e65-4465-a2e9-b00d5dfc8761
- apiVersion: v1
  data:
    ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIDMjCCAhqgAwIBAgIILN1CKhOBc2UwDQYJKoZIhvcNAQELBQAwNzESMBAGA1UE
      CxMJb3BlbnNoaWZ0MSEwHwYDVQQDExhrdWJlLWFwaXNlcnZlci1sYi1zaWduZXIw
      HhcNMjIxMDExMTYwMjIzWhcNMzIxMDA4MTYwMjIzWjA3MRIwEAYDVQQLEwlvcGVu
      c2hpZnQxITAfBgNVBAMTGGt1YmUtYXBpc2VydmVyLWxiLXNpZ25lcjCCASIwDQYJ
      KoZIhvcNAQEBBQADggEPADCCAQoCggEBANuVs0Z9M+eZOvZAbxX1JEXhGJ7cFlW+
      q1ZHT9zSgI6Riga/Jw/NjL+kjnhxsqz3ez/aDsva2zPmXaOZ2FjW7peUOMh089n0
      n5WbEB0tBNCZCBOpXvWu3/2wqfLfa8hl+YpbU+pQvO7mXqMdrIzinJpLbl20HlfA
      jlhTWSGAPqZft4hJzjel2SZiIUlCnp7FrEG42JFxREExuSkoPLhWRC0xfFB5pA9V
      JklEsBVb23M4Vti/BfwukvAiplx2X69+Qc9fXm7i+L45eSc9yQss5X67/1z7RsPa
      n3708K8JGFeXYuJ6nYQooQbhj3cvxtY31TPxIKcQE1FJa0Qmft+VYZkCAwEAAaNC
      MEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJlv
      mLJKYamTvm9Ks5bqMTNNbuFwMA0GCSqGSIb3DQEBCwUAA4IBAQCMEXtW2kb4gCyF
      NqW2f5ABK+9eMe9MjGUNYDY2kdYMwiw/nz89kwt/a3Ck5mTHnZIENNjTkYdv2wTC
      DFFCXQJFbSqyCpfEaTuCRpsBM4sZJrZdpjW74aqo7KwyQ3Gm9fClJuGfa2QF/gWU
      v7QF/8u732NVWC6DUUzu6xBMrTDnOjtKeMJ5PvfUpZv9u/RvWmkHBpQZfroBvuDy
      8PDJUjgJj0k/gIXljO3K9yLUHw76lKimmXdn5JR/UjZasQVY3t5FMDt1No6VjpLt
      811ELzxHsYsrzbeKlzBbZko1EIhIV9b5DXmykivnucJJC6gNrXnd4RMp/yHrdluN
      e5IpzDw7
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDQDCCAiigAwIBAgIICo9mBwuOce4wDQYJKoZIhvcNAQELBQAwPjESMBAGA1UE
      CxMJb3BlbnNoaWZ0MSgwJgYDVQQDEx9rdWJlLWFwaXNlcnZlci1sb2NhbGhvc3Qt
      c2lnbmVyMB4XDTIyMTAxMTE2MDIyMloXDTMyMTAwODE2MDIyMlowPjESMBAGA1UE
      CxMJb3BlbnNoaWZ0MSgwJgYDVQQDEx9rdWJlLWFwaXNlcnZlci1sb2NhbGhvc3Qt
      c2lnbmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqeSZnR3XSMrI
      As3vxbqT8KadC2vLa1Sv5VnnMnEaMzuJ0R0AwIgLDOVhNQKMN6KKnrHcdXhBuBT9
      kSgSKp4zlw65L7Eomgz2pGTqXrSL06xaXaxUXt7XxqDwEBEEueTacjSEkFbuSVLs
      x9alZYzg9ExhAz7za665/03tTEa+4bglAwqnw7/3xEauH7tyP+d3niLSewwXg8UF
      JtxZ7CHMKy/afV9+q61I6ULkj+V+Lt9eo11ucYTnJzmlGEac/n7fLj++lFwiafzq
      GxamgCaXBo6INUpX/8x2KZemHEXMYMRnsNHRmXjZi7PJIEP4doPxWEDS6reuS0P5
      urUkyOHfAQIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB
      /zAdBgNVHQ4EFgQUP6qELERdYc51gPE2PEiS/skbuEcwDQYJKoZIhvcNAQELBQAD
      ggEBABzoKx1Od3m2Koc5+g4SAFZT1+1LYBC8c+ew3v9mizzH6X5kXopdJkFZtHEN
      GBnd8Dlmjwu+DBppYWBvTz1/hC2+pZSVO4lbEWHeRB28unvzRfdT49OtADyCi0b4
      +Mr4C8BYb9FnfPXrMK1o7a8TW+NiV+Q5jeNnWSgqohV0U6peSFtHLWkfm3jF7xLL
      FrWPxiISIz37nPIIDdUrlNPVaNAI1kdynxC58faJJXfO+wWn/7ShvglL+sYhnL+K
      Fh2Nbqv6p+hBHLJ2BOLQNwuGDv2LNZ+/hHUCboDaSEBh0AhTiGYzLWvtMeF6WGGI
      HyS+I56cBeKvPQzlFdone09rvqo=
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDTDCCAjSgAwIBAgIIOwJx6MDGIWYwDQYJKoZIhvcNAQELBQAwRDESMBAGA1UE
      CxMJb3BlbnNoaWZ0MS4wLAYDVQQDEyVrdWJlLWFwaXNlcnZlci1zZXJ2aWNlLW5l
      dHdvcmstc2lnbmVyMB4XDTIyMTAxMTE2MDIyMloXDTMyMTAwODE2MDIyMlowRDES
      MBAGA1UECxMJb3BlbnNoaWZ0MS4wLAYDVQQDEyVrdWJlLWFwaXNlcnZlci1zZXJ2
      aWNlLW5ldHdvcmstc2lnbmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
      AQEAsC+Rrx7G1shNCywb0QxGuLYAzSoo3ML6l2KVR9NHydMQBDiOFd0+Sc7mczzu
      DoA70JPRyApjCm2QsZ1hNGV4WvDYzYemVQJgN1h8ogooohJNGieN9fnkfTiG96Sz
      0klaylWtr2WF0W6zyDMjT9DaRdQl9Th1lNBUFF3cwY+XIzzSZdS1ErUj1H6rzcdh
      HDoLmsuKkU9iQXDaOEhZ6xVEEF0P9Ich9PhsDjut6mmyC+bAOMNd+nqgzeX1JCC/
      wlEhSV6TWIhxj5N8Ug/lsevxtq0HQLMaBowCmjBzuvc93WfndxGzcWFKqjNq5ZMW
      j8qbGel+3n0buQrjsE8384bAbwIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAqQwDwYD
      VR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUBWOF9EVp9ugxbTYWOonVZLpqHjUwDQYJ
      KoZIhvcNAQELBQADggEBAIoS1fo2hRMp0iBRzIkl7B6ELDmWl7t6lZVp9qxYgbk+
      O5eBuuh5b4ZDKwFt74IlvLvXJTESGMrEPo47hf+FmJPbqrBx3Dc4OsTwkhVwmdzb
      CfEUzCYtVV2lKOH5EeMG6lb5wbTznYl/W0Vh4qZ6qNSRPwwSeMf0OWtdXu89QEm5
      F5T6GVlSZXBqs1AzuljEbBa9i/ExAenOQBqWow0JeTkWV1AgngIOh5+wBSOHYeaD
      154r0GVaDixcRvB1KC+QzOyHzSUkjlnKzzsY09qiY2Ne6PfXDLm6TCzI6vqtUM19
      dK/uFHtl/UwN9BreR7iElcZUr+c8U8lSFOSm66JmkeI=
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDlzCCAn+gAwIBAgIIfks7M1UA4OowDQYJKoZIhvcNAQELBQAwWTFXMFUGA1UE
      AwxOb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2xvY2FsaG9zdC1y
      ZWNvdmVyeS1zZXJ2aW5nLXNpZ25lckAxNjY1NTA0ODk3MB4XDTIyMTAxMTE2MTQ1
      N1oXDTMyMTAwODE2MTQ1OFowWTFXMFUGA1UEAwxOb3BlbnNoaWZ0LWt1YmUtYXBp
      c2VydmVyLW9wZXJhdG9yX2xvY2FsaG9zdC1yZWNvdmVyeS1zZXJ2aW5nLXNpZ25l
      ckAxNjY1NTA0ODk3MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA70nx
      R0LL9lcuXjtZoAIdPQBb4pHxv2d2ClCxNsWTnQYiMPL6xUlDXLrzLeM21dsmHi7h
      Kmsxfyk/dkXIO5v8j1EA52L0hMUTVaxxisZo9WCAimDuwIhkDffhYKyXxztB75A5
      OheKWWdq+HioM3cDhRZi9ifPv10PfPpKPK660bCOzQDJXnvrgI8P3OdjCILzu0ZL
      GVJiqFJX8gHt+I7EaWRsZZmomhmwdg28j/MevgYoF91aTXK9skbaEEjABtgytRqQ
      udTM1lS8G6A/ezOEkobJxKk65FQ9Gld0Wc36BVA85v+EiXK7selhHTozueo34nLP
      gwRJUU11Pw2PI6vyfwIDAQABo2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/
      BAUwAwEB/zAdBgNVHQ4EFgQUybhbyl062rBbI8U++BRyn6Ufx1kwHwYDVR0jBBgw
      FoAUybhbyl062rBbI8U++BRyn6Ufx1kwDQYJKoZIhvcNAQELBQADggEBAD8ZXhK4
      7GJLcjRCTNFCuOoZoxniIFePyz+vywNk+nVADNbWHsbTYPr5lrdqNumzop7uQhj5
      m0gBnEq9WFQvf8aYrkm3Y+qxs8+MyioshINFzNIej3EcE1qBmh84IjiHE9YWjYCe
      WKKNMRZopFx9ZAY3Qky8zgAPKKE8P7xTvHdNKV8T80qgei74D810niig8rwmthOU
      KcDbcigPykla3bJ3hEQCQI0Y0xLzptEZMb8jlSVlfVx/WAuyfVnPSRBHwyey3gpQ
      sXuMng2EzLIaODEuoRRHgTEfqRT1d20+rCXz/XQTsCHjtn3Yx6Nu44FO6oTm1sAb
      XQOxjoXGgUv7o2M=
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDbzCCAlegAwIBAgIIY75bKNpoEAEwDQYJKoZIhvcNAQELBQAwJjEkMCIGA1UE
      AwwbaW5ncmVzcy1vcGVyYXRvckAxNjY1NTA1MDM5MB4XDTIyMTAxMTE2MTg1N1oX
      DTI0MTAxMDE2MTg1OFowJzElMCMGA1UEAwwcKi5hcHBzLm9zdGVzdC5zaGlmdHN0
      YWNrLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKJA0zaaSN20
      Q5BTuruRbaGcTbybOdVWiYrmi8PrgXnk8obLF4W4Bmtsb/wpdc5M5BAP/rZtl4WF
      FlAfynzuPWlEIbMwgfFlKVG7l1gWWGmUvUnSev713+dfEQyFSgKYVH/AxkpzOn1f
      dONQ6vJ4QzmKAUpm7Bp00SuVvY0UL1+5jzv1SVpohyJ4UmYQuOOpjkMPoJYqLPNF
      cM6U910MyqViK7UH0NyNMB0Mh19byJvBlhfRLHw7Fvw+sPtnQN7iabTIHphaSrZI
      tDdFzLLtf+PMbLl6w5k18ZicH9J5EPyPuDz/zLkMDKaSpTr8CsCzwyMceM9IwTBC
      TDcIU8C8fH8CAwEAAaOBnzCBnDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYI
      KwYBBQUHAwEwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUNSJv6olBlRnaqXAwdaZy
      sp7dGLMwHwYDVR0jBBgwFoAUD1SAeJJkWGq+U06gBT1344dhVlgwJwYDVR0RBCAw
      HoIcKi5hcHBzLm9zdGVzdC5zaGlmdHN0YWNrLmNvbTANBgkqhkiG9w0BAQsFAAOC
      AQEAj/YFuJJPU3E/VansQjzpWhFVOjbaplfaYn1gvsEyokQnuxAAOzAfqvjnEHrU
      xVVJV13ckcjJ7VIUUy5wGf7CgJRLXPbjJBtOBDm2WyIf0qULQKG+tJ67+eh81BWq
      DnIrpL8QbiPzl9ufkbQCTifeli2yPiyNepn5d4b+RdhGVPS9sLZiU3SBqa5Tavtl
      T/HNrqWf+0F/yTtmIKs00d5lN5+/8bJcds2S4g9C2dqeIMLZnmVTgD1H9Ky17B1J
      /SRnHd1THpQ3HiCg/aPzlyT2S9kswzzo0DA8WFtuD1pbMeERPWu0gSJtUmGu+htr
      3HAqITRplOUs+7rAvSG/ZbRyaQ==
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDDDCCAfSgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtpbmdy
      ZXNzLW9wZXJhdG9yQDE2NjU1MDUwMzkwHhcNMjIxMDExMTYxNzE4WhcNMjQxMDEw
      MTYxNzE5WjAmMSQwIgYDVQQDDBtpbmdyZXNzLW9wZXJhdG9yQDE2NjU1MDUwMzkw
      ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDZk9YqsZXxy/YkoT+RarcI
      Ko20B7xhiThks1rVncJ2HBUo8V3hurUO5tOrAAbIeMYj/GzdllCciTAhgpV65lGg
      GwklkBuRSp8rhqrsqpePoNbyLiHg97Pv5PDcrpwfvVBd3kPPQhgpWNaNIctNBQMD
      fSBQqbW+Qq0/mOcqVRmew9LRr9VDY/FH9mjk1s5kp/d7YdpveTf7o9Ay6tW/Jmm+
      An8CteDngHcDT03etReUOZvhSb9yt52Wry8uisfdmZmNZ0ZMNSVJWctTWjSsknhW
      1gHpDPWNlz7DKYrzjaKt5U2WYmQ7gNeZ4MOJHzx5FNvjc9y3oDYN/WKQxbQ/dAdN
      AgMBAAGjRTBDMA4GA1UdDwEB/wQEAwICpDASBgNVHRMBAf8ECDAGAQH/AgEAMB0G
      A1UdDgQWBBQPVIB4kmRYar5TTqAFPXfjh2FWWDANBgkqhkiG9w0BAQsFAAOCAQEA
      VscU7ev2DCrEl8qxDhgqCZesY+i2HmQPS6lMm/kvwpXskDnSJtt5y9WJrY0OnOdc
      W2MDcDSbMckZ8ripMFPIfETtuCCAJTnkGa31eNOB4VvqeTf0LDJtK/zAUVKDvd8K
      Yc3dDeutLpwAJwwSLeQrEw2FTVfWp4RY82OqHiXvoihIYlTSfmgrMMXylPpCHY+l
      ZvC144hMh/TV3W+xyJmh0EQ3LBE4zLqFv2ysyQ4o6lhwdmFPAmEJ37oc6tb3ZKQA
      VpfACCP/POIw45BPmeBkggEw9KjpLyB1K1G8wvDgeOTSBTK7in801xsA9ckosS7F
      a3dfOThY2ElYs2djq3Dr1w==
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    annotations:
      kubernetes.io/description: Contains a CA bundle that can be used to verify the
        kube-apiserver when using internal endpoints such as the internal service
        IP or kubernetes.default.svc. No other usage is guaranteed across distributions
        of Kubernetes clusters.
    creationTimestamp: "2022-10-13T10:22:39Z"
    name: kube-root-ca.crt
    namespace: e2e-test-new-app-wckrp
    resourceVersion: "950749"
    uid: fcf82760-b2a0-47b3-8ed1-b4cee9f636a3
- apiVersion: v1
  data:
    service-ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIDUTCCAjmgAwIBAgIIWqQHBq17DxYwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE
      Awwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTY2NTUwNDg0ODAe
      Fw0yMjEwMTExNjE0MDhaFw0yNDEyMDkxNjE0MDlaMDYxNDAyBgNVBAMMK29wZW5z
      aGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE2NjU1MDQ4NDgwggEiMA0GCSqG
      SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCnQ7kRVFI9BQbx1ViDxaiQ0OxvNHomJEpt
      HoOQ4O+2U28imqMZoMPQH172nxIpxyNufn/4ObLXEBqNshYRcWv6p16GPLAXxYP2
      C4K4H8jQKGPFdtcoe8feeCuWlCghi9AHCa5/pzGK94eDF/hLrsf6zQ+iGx+3FqRf
      9m8CqbGdPkvRzWkbX/cNgIAE2SkEfB1jEiygA0kNmQ0lDN0yOoKUwm3UhOBRCr3m
      mwnYpHWlDQ4anvKKGaz6iqjhn8MZEUXg0b6SpplH/oRko+vqPLYbcxx19Etz7e02
      k7866xfEz8Upw/rq/rfjGqbx0p8WIwmngG1JowbAOdNc4We0mfPZAgMBAAGjYzBh
      MA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTKL313
      5EZX7D2w6+wXudOGBxB6STAfBgNVHSMEGDAWgBTKL3135EZX7D2w6+wXudOGBxB6
      STANBgkqhkiG9w0BAQsFAAOCAQEAGlUnIqdKOpkqrBgCBIBJxJq8WdZeGwTWVHAn
      6LFPsHVSpV8b50ENOQzkrmyL2CM1JPGUFHvUr81pRT7IKKlNa7Gi8f5aUlyg/wc3
      tmYB9PyO7KU3EkVxU7KfzCtMYHu/2H0PNeSTKVzgyLA4V7pEZDvCwhOjfKkerVvM
      CmVoo8XwgTmARM3nNCKQ3Yap0OGU388CmvuRfFkdh1i11xzs34CHIOER+JYSqV5e
      zVCHpEDuUG/yE0pf4XeqchIv3rCWyt1J5egkSMlBHP9Zhb+IVcd8nIA4kSBijRjB
      MYGk7eVOXTTBTiuzt2rBlStjWvtjHspLyTbbObqbtrAdv92YfQ==
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    annotations:
      service.beta.openshift.io/inject-cabundle: "true"
    creationTimestamp: "2022-10-13T10:22:39Z"
    name: openshift-service-ca.crt
    namespace: e2e-test-new-app-wckrp
    resourceVersion: "950761"
    uid: b9f8476c-e2d5-4c9a-879e-0f67d104c4a2
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Oct 13 10:25:02.794: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config describe pod/a234567890123456789012345678901234567890123456789012345678-1-build -n e2e-test-new-app-wckrp'
Oct 13 10:25:03.024: INFO: Describing pod "a234567890123456789012345678901234567890123456789012345678-1-build"
Name:         a234567890123456789012345678901234567890123456789012345678-1-build
Namespace:    e2e-test-new-app-wckrp
Priority:     0
Node:         ostest-n5rnf-worker-0-94fxs/10.196.2.169
Start Time:   Thu, 13 Oct 2022 10:22:56 +0000
Labels:       openshift.io/build.name=a234567890123456789012345678901234567890123456789012345678-1
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.165.125"
                    ],
                    "mac": "fa:16:3e:31:30:74",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.165.125"
                    ],
                    "mac": "fa:16:3e:31:30:74",
                    "default": true,
                    "dns": {}
                }]
              openshift.io/build.name: a234567890123456789012345678901234567890123456789012345678-1
              openshift.io/scc: privileged
Status:       Failed
IP:           10.128.165.125
IPs:
  IP:           10.128.165.125
Controlled By:  Build/a234567890123456789012345678901234567890123456789012345678-1
Init Containers:
  git-clone:
    Container ID:  cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Port:          <none>
    Host Port:     <none>
    Args:
      openshift-git-clone
      --loglevel=5
    State:      Terminated
      Reason:   Error
      Message:  value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
Cloning "https://github.com/sclorg/nodejs-ex" ...
I1013 10:23:23.416849       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:23.416865       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:39.417875       1 repository.go:545] Command execution timed out after 16s
WARNING: timed out waiting for git server, will wait 1m4s
I1013 10:23:39.418170       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:39.418214       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:59.503331       1 repository.go:541] Error executing command: exit status 128
I1013 10:23:59.503536       1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com

      Exit Code:    1
      Started:      Thu, 13 Oct 2022 10:23:23 +0000
      Finished:     Thu, 13 Oct 2022 10:24:59 +0000
    Ready:          False
    Restart Count:  0
    Environment:
      BUILD:                        {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
                                    
      LANG:                         C.utf8
      SOURCE_REPOSITORY:            https://github.com/sclorg/nodejs-ex
      SOURCE_URI:                   https://github.com/sclorg/nodejs-ex
      BUILD_LOGLEVEL:               5
      ALLOWED_UIDS:                 1-
      DROP_CAPS:                    KILL,MKNOD,SETGID,SETUID
      BUILD_REGISTRIES_CONF_PATH:   /var/run/configs/openshift.io/build-system/registries.conf
      BUILD_REGISTRIES_DIR_PATH:    /var/run/configs/openshift.io/build-system/registries.d
      BUILD_SIGNATURE_POLICY_PATH:  /var/run/configs/openshift.io/build-system/policy.json
      BUILD_STORAGE_CONF_PATH:      /var/run/configs/openshift.io/build-system/storage.conf
      BUILD_BLOBCACHE_DIR:          /var/cache/blobs
      HTTP_PROXY:                   
      HTTPS_PROXY:                  
      NO_PROXY:                     
    Mounts:
      /tmp/build from buildworkdir (rw)
      /var/cache/blobs from build-blob-cache (rw)
      /var/run/configs/openshift.io/build-system from build-system-configs (ro)
      /var/run/configs/openshift.io/certs from build-ca-bundles (rw)
      /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lx97v (ro)
  manage-dockerfile:
    Container ID:  
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Args:
      openshift-manage-dockerfile
      --loglevel=5
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      BUILD:                        {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
                                    
      LANG:                         C.utf8
      SOURCE_REPOSITORY:            https://github.com/sclorg/nodejs-ex
      SOURCE_URI:                   https://github.com/sclorg/nodejs-ex
      BUILD_LOGLEVEL:               5
      ALLOWED_UIDS:                 1-
      DROP_CAPS:                    KILL,MKNOD,SETGID,SETUID
      BUILD_REGISTRIES_CONF_PATH:   /var/run/configs/openshift.io/build-system/registries.conf
      BUILD_REGISTRIES_DIR_PATH:    /var/run/configs/openshift.io/build-system/registries.d
      BUILD_SIGNATURE_POLICY_PATH:  /var/run/configs/openshift.io/build-system/policy.json
      BUILD_STORAGE_CONF_PATH:      /var/run/configs/openshift.io/build-system/storage.conf
      BUILD_BLOBCACHE_DIR:          /var/cache/blobs
      HTTP_PROXY:                   
      HTTPS_PROXY:                  
      NO_PROXY:                     
    Mounts:
      /tmp/build from buildworkdir (rw)
      /var/cache/blobs from build-blob-cache (rw)
      /var/run/configs/openshift.io/build-system from build-system-configs (ro)
      /var/run/configs/openshift.io/certs from build-ca-bundles (rw)
      /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lx97v (ro)
Containers:
  sti-build:
    Container ID:  
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Args:
      openshift-sti-build
      --loglevel=5
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      BUILD:                        {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
                                    
      LANG:                         C.utf8
      SOURCE_REPOSITORY:            https://github.com/sclorg/nodejs-ex
      SOURCE_URI:                   https://github.com/sclorg/nodejs-ex
      BUILD_LOGLEVEL:               5
      ALLOWED_UIDS:                 1-
      DROP_CAPS:                    KILL,MKNOD,SETGID,SETUID
      PUSH_DOCKERCFG_PATH:          /var/run/secrets/openshift.io/push
      PULL_DOCKERCFG_PATH:          /var/run/secrets/openshift.io/pull
      BUILD_REGISTRIES_CONF_PATH:   /var/run/configs/openshift.io/build-system/registries.conf
      BUILD_REGISTRIES_DIR_PATH:    /var/run/configs/openshift.io/build-system/registries.d
      BUILD_SIGNATURE_POLICY_PATH:  /var/run/configs/openshift.io/build-system/policy.json
      BUILD_STORAGE_CONF_PATH:      /var/run/configs/openshift.io/build-system/storage.conf
      BUILD_STORAGE_DRIVER:         overlay
      BUILD_BLOBCACHE_DIR:          /var/cache/blobs
      HTTP_PROXY:                   
      HTTPS_PROXY:                  
      NO_PROXY:                     
    Mounts:
      /tmp/build from buildworkdir (rw)
      /var/cache/blobs from build-blob-cache (rw)
      /var/lib/containers/cache from buildcachedir (rw)
      /var/lib/containers/storage from container-storage-root (rw)
      /var/lib/kubelet/config.json from node-pullsecrets (rw)
      /var/run/configs/openshift.io/build-system from build-system-configs (ro)
      /var/run/configs/openshift.io/certs from build-ca-bundles (rw)
      /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lx97v (ro)
      /var/run/secrets/openshift.io/pull from builder-dockercfg-xsbfr-pull (ro)
      /var/run/secrets/openshift.io/push from builder-dockercfg-xsbfr-push (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  node-pullsecrets:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/config.json
    HostPathType:  File
  buildcachedir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/containers/cache
    HostPathType:  
  buildworkdir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  builder-dockercfg-xsbfr-push:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-xsbfr
    Optional:    false
  builder-dockercfg-xsbfr-pull:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-xsbfr
    Optional:    false
  build-system-configs:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      a234567890123456789012345678901234567890123456789012345678-1-sys-config
    Optional:  false
  build-ca-bundles:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      a234567890123456789012345678901234567890123456789012345678-1-ca
    Optional:  false
  build-proxy-ca-bundles:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      a234567890123456789012345678901234567890123456789012345678-1-global-ca
    Optional:  false
  container-storage-root:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  build-blob-cache:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-lx97v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       2m6s  default-scheduler  Successfully assigned e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678-1-build to ostest-n5rnf-worker-0-94fxs
  Normal  AddedInterface  101s  multus             Add eth0 [10.128.165.125/23] from kuryr
  Normal  Pulled          100s  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
  Normal  Created         100s  kubelet            Created container git-clone
  Normal  Started         100s  kubelet            Started container git-clone


Oct 13 10:25:03.024: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c git-clone -n e2e-test-new-app-wckrp'
Oct 13 10:25:03.276: INFO: Log for pod "a234567890123456789012345678901234567890123456789012345678-1-build"/"git-clone"
---->
I1013 10:23:23.406661       1 builder.go:393] openshift-builder 4.9.0-202210061647.p0.g1a32676.assembly.stream-1a32676
I1013 10:23:23.406922       1 builder.go:393] Powered by buildah v1.22.4
I1013 10:23:23.415787       1 builder.go:394] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}}
Cloning "https://github.com/sclorg/nodejs-ex" ...
I1013 10:23:23.416849       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:23.416865       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:39.417875       1 repository.go:545] Command execution timed out after 16s
WARNING: timed out waiting for git server, will wait 1m4s
I1013 10:23:39.418170       1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:39.418214       1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:59.503331       1 repository.go:541] Error executing command: exit status 128
I1013 10:23:59.503536       1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
<----end of log for "a234567890123456789012345678901234567890123456789012345678-1-build"/"git-clone"

Oct 13 10:25:03.277: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c manage-dockerfile -n e2e-test-new-app-wckrp'
Oct 13 10:25:03.490: INFO: Error running /usr/local/bin/oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c manage-dockerfile -n e2e-test-new-app-wckrp:
StdOut>
Error from server (BadRequest): container "manage-dockerfile" in pod "a234567890123456789012345678901234567890123456789012345678-1-build" is waiting to start: PodInitializing
StdErr>
Error from server (BadRequest): container "manage-dockerfile" in pod "a234567890123456789012345678901234567890123456789012345678-1-build" is waiting to start: PodInitializing

Oct 13 10:25:03.490: INFO: Error retrieving logs for pod "a234567890123456789012345678901234567890123456789012345678-1-build"/"manage-dockerfile": exit status 1


Oct 13 10:25:03.490: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c sti-build -n e2e-test-new-app-wckrp'
Oct 13 10:25:03.692: INFO: Error running /usr/local/bin/oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c sti-build -n e2e-test-new-app-wckrp:
StdOut>
Error from server (BadRequest): container "sti-build" in pod "a234567890123456789012345678901234567890123456789012345678-1-build" is waiting to start: PodInitializing
StdErr>
Error from server (BadRequest): container "sti-build" in pod "a234567890123456789012345678901234567890123456789012345678-1-build" is waiting to start: PodInitializing

Oct 13 10:25:03.692: INFO: Error retrieving logs for pod "a234567890123456789012345678901234567890123456789012345678-1-build"/"sti-build": exit status 1


Oct 13 10:25:03.692: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 get dc/a234567890123456789012345678901234567890123456789012345678 -o yaml'
Oct 13 10:25:03.915: INFO: Error running /usr/local/bin/oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 get dc/a234567890123456789012345678901234567890123456789012345678 -o yaml:
StdOut>
Error from server (NotFound): deploymentconfigs.apps.openshift.io "a234567890123456789012345678901234567890123456789012345678" not found
StdErr>
Error from server (NotFound): deploymentconfigs.apps.openshift.io "a234567890123456789012345678901234567890123456789012345678" not found

Oct 13 10:25:03.915: INFO: Error getting Deployment Config a234567890123456789012345678901234567890123456789012345678: exit status 1
Oct 13 10:25:03.915: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 get dc/a2345678901234567890123456789012345678901234567890123456789 -o yaml'
Oct 13 10:25:04.107: INFO: Error running /usr/local/bin/oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 get dc/a2345678901234567890123456789012345678901234567890123456789 -o yaml:
StdOut>
Error from server (NotFound): deploymentconfigs.apps.openshift.io "a2345678901234567890123456789012345678901234567890123456789" not found
StdErr>
Error from server (NotFound): deploymentconfigs.apps.openshift.io "a2345678901234567890123456789012345678901234567890123456789" not found

Oct 13 10:25:04.107: INFO: Error getting Deployment Config a2345678901234567890123456789012345678901234567890123456789: exit status 1
[AfterEach] [sig-builds][Feature:Builds] oc new-app
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-new-app-wckrp".
STEP: Found 18 events.
Oct 13 10:25:04.145: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: { } Scheduled: Successfully assigned e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678-1-build to ostest-n5rnf-worker-0-94fxs
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:55 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678: {deployment-controller } ScalingReplicaSet: Scaled up replica set a234567890123456789012345678901234567890123456789012345678-fb95dd4dc to 1
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:55 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678tb4vg" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678fx8fg" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678bhqgn" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678h7l7w" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a2345678901234567890123456789012345678901234567890123456788zv8b" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678nzlgz" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678zxbsb" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a2345678901234567890123456789012345678901234567890123456789w7gh" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:57 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678tgvpf" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:58 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: (combined from similar events): Error creating: Pod "a234567890123456789012345678901234567890123456789012345678gkclr" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:22 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: {multus } AddedInterface: Add eth0 [10.128.165.125/23] from kuryr
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:23 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1: {build-controller } BuildStarted: Build e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678-1 is now running
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:23 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:23 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container git-clone
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:23 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container git-clone
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:24:59 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1: {build-controller } BuildFailed: Build e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678-1 failed
Oct 13 10:25:04.164: INFO: POD                                                                 NODE                         PHASE   GRACE  CONDITIONS
Oct 13 10:25:04.164: INFO: a234567890123456789012345678901234567890123456789012345678-1-build  ostest-n5rnf-worker-0-94fxs  Failed         [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC ContainersNotInitialized containers with incomplete status: [git-clone manage-dockerfile]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC ContainersNotReady containers with unready status: [sti-build]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC ContainersNotReady containers with unready status: [sti-build]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC  }]
Oct 13 10:25:04.164: INFO: 
Oct 13 10:25:04.183: INFO: skipping dumping cluster info - cluster too large
Oct 13 10:25:04.221: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-new-app-wckrp-user}, err: <nil>
Oct 13 10:25:04.255: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-new-app-wckrp}, err: <nil>
Oct 13 10:25:04.271: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~RGqZnbDYVSTNS-SqNEdIFwbgliGNgXsN10hXYQeuEWE}, err: <nil>
[AfterEach] [sig-builds][Feature:Builds] oc new-app
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-new-app-wckrp" for this suite.
fail [github.com/openshift/origin/test/extended/builds/new_app.go:68]: Unexpected error:
    <*errors.errorString | 0xc00295bda0>: {
        s: "The build \"a234567890123456789012345678901234567890123456789012345678-1\" status is \"Failed\"",
    }
    The build "a234567890123456789012345678901234567890123456789012345678-1" status is "Failed"
occurred

Stderr
_sig-cli__oc_debug_deployment_configs_from_a_build__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 235.0s

_sig-auth__Feature_OAuthServer___Headers__expected_headers_returned_from_the_authorize_URL__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 184.0s

_sig-cli__oc_adm_node-logs__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 6.6s

_sig-imageregistry__Feature_Image__oc_tag_should_work_when_only_imagestreams_api_is_available__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.7s

_sig-imageregistry__Feature_ImageAppend__Image_append_should_create_images_by_appending_them__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 65.0s

Failed:
fail [k8s.io/kubernetes@v1.22.1/test/e2e/framework/pods.go:212]: wait for pod "append-test" to succeed
Expected success, but got an error:
    <*errors.errorString | 0xc002424430>: {
        s: "pod \"append-test\" failed with reason: \"\", message: \"\"",
    }
    pod "append-test" failed with reason: "", message: ""

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-imageregistry][Feature:ImageAppend] Image append
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-imageregistry][Feature:ImageAppend] Image append
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:22:28.588: INFO: configPath is now "/tmp/configfile1392854775"
Oct 13 10:22:28.588: INFO: The user is now "e2e-test-image-append-brffw-user"
Oct 13 10:22:28.588: INFO: Creating project "e2e-test-image-append-brffw"
Oct 13 10:22:28.788: INFO: Waiting on permissions in project "e2e-test-image-append-brffw" ...
Oct 13 10:22:28.796: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:22:28.909: INFO: Waiting for service account "default" secrets (default-token-q7nlh) to include dockercfg/token ...
Oct 13 10:22:29.025: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:22:29.147: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:22:29.255: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:22:29.262: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:22:29.270: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:22:29.813: INFO: Project "e2e-test-image-append-brffw" has been fully provisioned.
[It] should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/images/append.go:83
Oct 13 10:22:29.910: INFO: Waiting up to 3m0s for pod "append-test" in namespace "e2e-test-image-append-brffw" to be "Succeeded or Failed"
Oct 13 10:22:29.930: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.023557ms
Oct 13 10:22:31.947: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036883447s
Oct 13 10:22:33.966: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055741039s
Oct 13 10:22:35.973: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062898309s
Oct 13 10:22:37.978: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0678164s
Oct 13 10:22:39.989: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078382749s
Oct 13 10:22:41.993: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.082545891s
Oct 13 10:22:44.003: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.093114235s
Oct 13 10:22:46.013: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.103008252s
Oct 13 10:22:48.020: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.109845409s
Oct 13 10:22:50.031: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.120782622s
Oct 13 10:22:52.042: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.131599187s
Oct 13 10:22:54.064: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 24.154090365s
Oct 13 10:22:56.077: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 26.166415295s
Oct 13 10:22:58.086: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 28.176048625s
Oct 13 10:23:00.101: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 30.190695849s
Oct 13 10:23:02.111: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 32.200295818s
Oct 13 10:23:04.126: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 34.215567369s
Oct 13 10:23:06.130: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 36.219889037s
Oct 13 10:23:08.136: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 38.225965366s
Oct 13 10:23:10.145: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 40.234551287s
Oct 13 10:23:12.158: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 42.247262844s
Oct 13 10:23:14.162: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 44.25202662s
Oct 13 10:23:16.168: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 46.257989906s
Oct 13 10:23:18.176: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 48.265587247s
Oct 13 10:23:20.189: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 50.279241766s
Oct 13 10:23:22.201: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 52.290266778s
Oct 13 10:23:24.209: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 54.298909672s
Oct 13 10:23:26.225: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 56.314652881s
Oct 13 10:23:28.232: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 58.322006445s
Oct 13 10:23:30.239: INFO: Pod "append-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.328381651s
Oct 13 10:23:32.246: INFO: Pod "append-test": Phase="Failed", Reason="", readiness=false. Elapsed: 1m2.335469454s
[AfterEach] [sig-imageregistry][Feature:ImageAppend] Image append
  github.com/openshift/origin/test/extended/images/append.go:75
Oct 13 10:23:32.274: INFO: Running 'oc --namespace=e2e-test-image-append-brffw --kubeconfig=.kube/config describe pod/append-test -n e2e-test-image-append-brffw'
Oct 13 10:23:32.496: INFO: Describing pod "append-test"
Name:         append-test
Namespace:    e2e-test-image-append-brffw
Priority:     0
Node:         ostest-n5rnf-worker-0-8kq82/10.196.2.72
Start Time:   Thu, 13 Oct 2022 10:22:29 +0000
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.182.59"
                    ],
                    "mac": "fa:16:3e:43:63:2c",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.182.59"
                    ],
                    "mac": "fa:16:3e:43:63:2c",
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: anyuid
Status:       Failed
IP:           10.128.182.59
IPs:
  IP:  10.128.182.59
Containers:
  test:
    Container ID:  cri-o://9859e2ed4d4b1b0c5220ecbcf3b71919d2946354c918a298dd2cf3e3bc743f53
    Image:         image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
    Image ID:      image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      set -euo pipefail; set -x
      
      # create a scratch image with fixed date
      oc image append --insecure --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:scratch1 --image='{"Cmd":["/bin/sleep"]}' --created-at=0
      
      # create a second scratch image with fixed date
      oc image append --insecure --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:scratch2 --image='{"Cmd":["/bin/sleep"]}' --created-at=0
      
      # modify a shell image
      oc image append --insecure --from image-registry.openshift-image-registry.svc:5000/openshift/tools:latest --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:busybox1 --image '{"Cmd":["/bin/sleep"]}'
      
      # verify mounting works
      oc create is test2
      oc image append --insecure --from image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:scratch2 --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test2:scratch2 --force
      
      # add a simple layer to the image
      mkdir -p /tmp/test/dir
      touch /tmp/test/1
      touch /tmp/test/dir/2
      tar cvzf /tmp/layer.tar.gz -C /tmp/test/ .
      oc image append --insecure --from=image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:busybox1 --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:busybox2 /tmp/layer.tar.gz
      
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 13 Oct 2022 10:22:58 +0000
      Finished:     Thu, 13 Oct 2022 10:23:28 +0000
    Ready:          False
    Restart Count:  0
    Environment:
      HOME:  /secret
    Mounts:
      /secret/.dockercfg from pull-secret (rw,path=".dockercfg")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5frh2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  pull-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-5wtd9
    Optional:    false
  kube-api-access-5frh2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       62s   default-scheduler  Successfully assigned e2e-test-image-append-brffw/append-test to ostest-n5rnf-worker-0-8kq82
  Normal  AddedInterface  35s   multus             Add eth0 [10.128.182.59/23] from kuryr
  Normal  Pulling         35s   kubelet            Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest"
  Normal  Pulled          35s   kubelet            Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" in 74.854252ms
  Normal  Created         34s   kubelet            Created container test
  Normal  Started         34s   kubelet            Started container test


Oct 13 10:23:32.496: INFO: Running 'oc --namespace=e2e-test-image-append-brffw --kubeconfig=.kube/config logs pod/append-test -c test -n e2e-test-image-append-brffw'
Oct 13 10:23:32.652: INFO: Log for pod "append-test"/"test"
---->
+ oc image append --insecure --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:scratch1 '--image={"Cmd":["/bin/sleep"]}' --created-at=0
Uploading ... failed
Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
<----end of log for "append-test"/"test"

[AfterEach] [sig-imageregistry][Feature:ImageAppend] Image append
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-image-append-brffw".
STEP: Found 6 events.
Oct 13 10:23:32.658: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for append-test: { } Scheduled: Successfully assigned e2e-test-image-append-brffw/append-test to ostest-n5rnf-worker-0-8kq82
Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:57 +0000 UTC - event for append-test: {multus } AddedInterface: Add eth0 [10.128.182.59/23] from kuryr
Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:57 +0000 UTC - event for append-test: {kubelet ostest-n5rnf-worker-0-8kq82} Pulling: Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest"
Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:57 +0000 UTC - event for append-test: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" in 74.854252ms
Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:58 +0000 UTC - event for append-test: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container test
Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:58 +0000 UTC - event for append-test: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container test
Oct 13 10:23:32.676: INFO: POD          NODE                         PHASE   GRACE  CONDITIONS
Oct 13 10:23:32.676: INFO: append-test  ostest-n5rnf-worker-0-8kq82  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:29 +0000 UTC ContainersNotReady containers with unready status: [test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:29 +0000 UTC ContainersNotReady containers with unready status: [test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:29 +0000 UTC  }]
Oct 13 10:23:32.676: INFO: 
Oct 13 10:23:32.684: INFO: skipping dumping cluster info - cluster too large
Oct 13 10:23:32.860: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-image-append-brffw-user}, err: <nil>
Oct 13 10:23:32.881: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-image-append-brffw}, err: <nil>
Oct 13 10:23:32.896: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~roCUBuZvnpDyr56M8gjVPOdb1nOvLE_NUQgFvLll-tw}, err: <nil>
[AfterEach] [sig-imageregistry][Feature:ImageAppend] Image append
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-image-append-brffw" for this suite.
fail [k8s.io/kubernetes@v1.22.1/test/e2e/framework/pods.go:212]: wait for pod "append-test" to succeed
Expected success, but got an error:
    <*errors.errorString | 0xc002424430>: {
        s: "pod \"append-test\" failed with reason: \"\", message: \"\"",
    }
    pod "append-test" failed with reason: "", message: ""

Stderr
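
The log above shows the pod failed on the very first step of its script: the initial oc image append push to the internal registry timed out before any layer was committed ("Unable to connect to the server: context deadline exceeded"). A minimal reproduction sketch of that step, assuming the internal registry is reachable from where you run it and that you can push to a namespace of your own (<namespace> is a placeholder; the flags are copied from the pod command above):

  # Hypothetical manual re-run of the step that timed out; <namespace> is a placeholder namespace.
  oc image append --insecure \
    --to image-registry.openshift-image-registry.svc:5000/<namespace>/test:scratch1 \
    --image='{"Cmd":["/bin/sleep"]}' \
    --created-at=0
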
_sig-cli__oc_adm_ui-project-commands__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 7.8s

_sig-cluster-lifecycle__Feature_Machines__Managed_cluster_should_have_machine_resources__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.8s

_sig-cli__CLI_can_run_inside_of_a_busybox_container__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 30.2s

_sig-network__Feature_Router__The_HAProxy_router_should_support_reencrypt_to_services_backed_by_a_serving_certificate_automatically__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 117.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_user.openshift.io/v1,_Resource=groups__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.4s

_sig-builds__Feature_Builds__prune_builds_based_on_settings_in_the_buildconfig__should_prune_errored_builds_based_on_the_failedBuildsHistoryLimit_setting__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 39.9s

_sig-imageregistry__Feature_ImageInfo__Image_info_should_display_information_about_images__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 78.0s

_sig-auth__Feature_OAuthServer___Headers__expected_headers_returned_from_the_token_request_URL__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 115.0s

_sig-auth__Feature_ProjectAPI___TestProjectWatch_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 22.5s

_sig-network__Feature_Router__The_HAProxy_router_should_serve_the_correct_routes_when_scoped_to_a_single_namespace_and_label_set__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 123.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_oauth.openshift.io/v1,_Resource=oauthaccesstokens__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.7s

_sig-builds__Feature_Builds__oc_new-app__should_succeed_with_an_imagestream__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 18.8s

_sig-cli__oc_adm_must-gather_when_looking_at_the_audit_logs__sig-node__kubelet_runs_apiserver_processes_strictly_sequentially_in_order_to_not_risk_audit_log_corruption__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 126.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_image.openshift.io/v1,_Resource=images__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.5s

_sig-auth__Feature_OpenShiftAuthorization__self-SAR_compatibility__TestBootstrapPolicySelfSubjectAccessReviews_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.6s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_with_revision_history_limits_should_never_persist_more_old_deployments_than_acceptable_after_being_observed_by_the_controller__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 517.0s

_sig-auth__Feature_OAuthServer___Headers__expected_headers_returned_from_the_root_URL__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 75.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_oauth.openshift.io/v1,_Resource=oauthauthorizetokens__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

_sig-auth__Feature_OpenShiftAuthorization__RBAC_proxy_for_openshift_authz__RunLegacyClusterRoleBindingEndpoint_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.8s

_sig-cli__oc_builds_new-build__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 875.0s

_sig-builds__Feature_Builds__Multi-stage_image_builds_should_succeed__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 153.0s

_sig-auth__Feature_OAuthServer__OAuth_Authenticator_accepts_sha256_access_tokens__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-auth__Feature_OpenShiftAuthorization__authorization__TestClusterReaderCoverage_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.0s

Skipped: skip [github.com/openshift/origin/test/extended/authorization/authorization.go:48]: this test was in integration and didn't cover a real configuration, so it's horribly, horribly wrong now
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-auth][Feature:OpenShiftAuthorization] authorization
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-auth][Feature:OpenShiftAuthorization] authorization
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:19:53.847: INFO: configPath is now "/tmp/configfile3242513760"
Oct 13 10:19:53.847: INFO: The user is now "e2e-test-bootstrap-policy-z2g96-user"
Oct 13 10:19:53.847: INFO: Creating project "e2e-test-bootstrap-policy-z2g96"
Oct 13 10:19:54.132: INFO: Waiting on permissions in project "e2e-test-bootstrap-policy-z2g96" ...
Oct 13 10:19:54.140: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:19:54.255: INFO: Waiting for service account "default" secrets (default-token-cp6sd) to include dockercfg/token ...
Oct 13 10:19:54.349: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:19:54.456: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:19:54.573: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:19:54.594: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:19:54.602: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:19:55.134: INFO: Project "e2e-test-bootstrap-policy-z2g96" has been fully provisioned.
[It] should succeed [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/authorization/authorization.go:47
[AfterEach] [sig-auth][Feature:OpenShiftAuthorization] authorization
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:19:55.180: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-bootstrap-policy-z2g96-user}, err: <nil>
Oct 13 10:19:55.220: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-bootstrap-policy-z2g96}, err: <nil>
Oct 13 10:19:55.249: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~R_SlYrNvyTjtuD9RyTm427KAWERS55tnzQNihXu9mKE}, err: <nil>
[AfterEach] [sig-auth][Feature:OpenShiftAuthorization] authorization
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-bootstrap-policy-z2g96" for this suite.
skip [github.com/openshift/origin/test/extended/authorization/authorization.go:48]: this test was in integration and didn't cover a real configuration, so it's horribly, horribly wrong now

Stderr
_sig-apps__Feature_DeploymentConfig__deploymentconfigs_with_minimum_ready_seconds_set_should_not_transition_the_deployment_to_Complete_before_satisfied__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 217.0s

_sig-imageregistry__Feature_ImageLookup__Image_policy_should_perform_lookup_when_the_object_has_the_resolve-names_annotation__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 9.9s

_sig-cli__oc_adm_user-creation__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.3s

_sig-auth__Feature_RoleBindingRestrictions__RoleBindingRestrictions_should_be_functional__Create_a_rolebinding_when_subject_is_already_bound_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.0s

_sig-cli__oc_debug_dissect_deployment_config_debug__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 63.0s

_sig-auth__Feature_HTPasswdAuth__HTPasswd_IDP_should_successfully_configure_htpasswd_and_be_responsive__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 48.2s

_sig-builds__Feature_Builds__buildconfig_secret_injector__should_inject_secrets_to_the_appropriate_buildconfigs__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.6s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_paused_should_disable_actions_on_deployments__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 55.9s

_sig-auth__Feature_RoleBindingRestrictions__RoleBindingRestrictions_should_be_functional__Create_a_rolebinding_when_subject_is_not_already_bound_and_is_not_permitted_by_any_RBR_should_fail__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.8s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_user.openshift.io/v1,_Resource=identities__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.4s

_sig-network__Feature_Router__The_HAProxy_router_should_serve_routes_that_were_created_from_an_ingress__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 52.9s

_sig-apps__Feature_OpenShiftControllerManager__TestDeployScale__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.7s

_sig-arch__ClusterOperators_should_define_at_least_one_related_object_that_is_not_a_namespace__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.5s

_sig-devex__Feature_Templates__templateservicebroker_security_test__should_pass_security_tests__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.9s

Skipped: skip [github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:50]: The template service broker is not installed: services "apiserver" not found
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-devex][Feature:Templates] templateservicebroker security test
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-devex][Feature:Templates] templateservicebroker security test
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:18:45.926: INFO: configPath is now "/tmp/configfile2207960907"
Oct 13 10:18:45.926: INFO: The user is now "e2e-test-templates-c2f4k-user"
Oct 13 10:18:45.926: INFO: Creating project "e2e-test-templates-c2f4k"
Oct 13 10:18:46.092: INFO: Waiting on permissions in project "e2e-test-templates-c2f4k" ...
Oct 13 10:18:46.102: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:18:46.220: INFO: Waiting for service account "default" secrets (default-dockercfg-6qm6t,default-dockercfg-6qm6t) to include dockercfg/token ...
Oct 13 10:18:46.309: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:18:46.418: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:18:46.534: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:18:46.559: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:18:46.572: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:18:47.116: INFO: Project "e2e-test-templates-c2f4k" has been fully provisioned.
[JustBeforeEach] [sig-devex][Feature:Templates] templateservicebroker security test
  github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:46
Oct 13 10:18:47.132: INFO: The template service broker is not installed: services "apiserver" not found
[AfterEach] 
  github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:151
[AfterEach] [sig-devex][Feature:Templates] templateservicebroker security test
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:18:47.168: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-templates-c2f4k-user}, err: <nil>
Oct 13 10:18:47.194: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-templates-c2f4k}, err: <nil>
Oct 13 10:18:47.234: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~qqtKc6EKrzx3LZ0THxXocGf2q-yVs_QFGM2Cw7qE4nE}, err: <nil>
[AfterEach] [sig-devex][Feature:Templates] templateservicebroker security test
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-templates-c2f4k" for this suite.
[AfterEach] [sig-devex][Feature:Templates] templateservicebroker security test
  github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:78
skip [github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:50]: The template service broker is not installed: services "apiserver" not found

Stderr
_sig-instrumentation__Prometheus_when_installed_on_the_cluster_should_have_a_AlertmanagerReceiversNotConfigured_alert_in_firing_state__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 106.0s

Failed:
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:425]: Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "promQL query returned unexpected results:\nALERTS{alertstate=~\"firing|pending\",alertname=\"AlertmanagerReceiversNotConfigured\"} == 1\n[]",
        },
    ]
    promQL query returned unexpected results:
    ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1
    []
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:250
[It] should have a AlertmanagerReceiversNotConfigured alert in firing state [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:414
Oct 13 10:18:47.799: INFO: Creating namespace "e2e-test-prometheus-rgcjs"
Oct 13 10:18:48.082: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:18:48.208: INFO: Creating new exec pod
STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1
Oct 13 10:19:38.429: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"'
Oct 13 10:19:38.930: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n"
Oct 13 10:19:38.930: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1
Oct 13 10:19:48.936: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"'
Oct 13 10:19:49.360: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n"
Oct 13 10:19:49.360: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1
Oct 13 10:19:59.364: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"'
Oct 13 10:19:59.770: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n"
Oct 13 10:19:59.770: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1
Oct 13 10:20:09.771: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"'
Oct 13 10:20:10.208: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n"
Oct 13 10:20:10.208: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1
Oct 13 10:20:20.209: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"'
Oct 13 10:20:20.571: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n"
Oct 13 10:20:20.571: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-prometheus-rgcjs".
STEP: Found 6 events.
Oct 13 10:20:30.617: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-rgcjs/execpod to ostest-n5rnf-worker-0-j4pkp
Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_execpod_e2e-test-prometheus-rgcjs_aa828229-4e80-481e-91d8-9da6b7d5b4b3_0(b1197de2b83f76ff87129fc7d36e6d651057920735675d6f4561d82aebc9aa8a): error adding pod e2e-test-prometheus-rgcjs_execpod to CNI network "multus-cni-network": [e2e-test-prometheus-rgcjs/execpod/aa828229-4e80-481e-91d8-9da6b7d5b4b3:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?; Post "http://localhost:5036/addNetwork": EOF
Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:37 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.159.171/23] from kuryr
Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:37 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine
Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:37 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:37 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 10:20:30.623: INFO: POD      NODE                         PHASE    GRACE  CONDITIONS
Oct 13 10:20:30.623: INFO: execpod  ostest-n5rnf-worker-0-j4pkp  Running  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:48 +0000 UTC  }]
Oct 13 10:20:30.623: INFO: 
Oct 13 10:20:30.636: INFO: skipping dumping cluster info - cluster too large
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-prometheus-rgcjs" for this suite.
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:425]: Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "promQL query returned unexpected results:\nALERTS{alertstate=~\"firing|pending\",alertname=\"AlertmanagerReceiversNotConfigured\"} == 1\n[]",
        },
    ]
    promQL query returned unexpected results:
    ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1
    []
occurred

Stderr
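
The test above polls the thanos-querier service with the PromQL expression shown in the failure message and expects a non-empty result; every poll returned an empty vector, so the AlertmanagerReceiversNotConfigured alert was never pending or firing on this cluster. A sketch of running the same check by hand, with TOKEN standing in for a bearer token (for example from oc whoami -t); the endpoint and query are the ones in the log above:

  # Hypothetical manual check; TOKEN is a placeholder for a token with access to the monitoring stack.
  curl -s -k -G -H "Authorization: Bearer $TOKEN" \
    --data-urlencode 'query=ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1' \
    https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query
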
_sig-api-machinery__Feature_APIServer__TestTLSDefaults__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.7s

Skipped: skip [github.com/openshift/origin/test/extended/apiserver/tls.go:18]: skipping because it was broken in master
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-api-machinery][Feature:APIServer]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-api-machinery][Feature:APIServer]
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:18:43.571: INFO: configPath is now "/tmp/configfile4267057803"
Oct 13 10:18:43.571: INFO: The user is now "e2e-test-apiserver-sc9jw-user"
Oct 13 10:18:43.571: INFO: Creating project "e2e-test-apiserver-sc9jw"
Oct 13 10:18:43.789: INFO: Waiting on permissions in project "e2e-test-apiserver-sc9jw" ...
Oct 13 10:18:43.801: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:18:43.917: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:18:44.032: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:18:44.143: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:18:44.152: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:18:44.159: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:18:44.685: INFO: Project "e2e-test-apiserver-sc9jw" has been fully provisioned.
[It] TestTLSDefaults [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/apiserver/tls.go:17
[AfterEach] [sig-api-machinery][Feature:APIServer]
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:18:44.699: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-apiserver-sc9jw-user}, err: <nil>
Oct 13 10:18:44.711: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-apiserver-sc9jw}, err: <nil>
Oct 13 10:18:44.722: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~AQS98cMTx386yNULGmkAWvXfzREce70pIBdW9JuJkFA}, err: <nil>
[AfterEach] [sig-api-machinery][Feature:APIServer]
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-apiserver-sc9jw" for this suite.
skip [github.com/openshift/origin/test/extended/apiserver/tls.go:18]: skipping because it was broken in master

Stderr
_sig-operator__OLM_should_be_installed_with_installplans_at_version_v1alpha1__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

_sig-instrumentation__Prometheus_when_installed_on_the_cluster_should_provide_ingress_metrics__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 113.0s

Failed:
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:571]: Unexpected error:
    <errors.aggregate | len:2, cap:2>: [
        {
            s: "promQL query returned unexpected results:\ntemplate_router_reload_seconds_count{job=\"router-internal-default\"} >= 1\n[]",
        },
        {
            s: "promQL query returned unexpected results:\nhaproxy_server_up{job=\"router-internal-default\"} >= 1\n[]",
        },
    ]
    [promQL query returned unexpected results:
    template_router_reload_seconds_count{job="router-internal-default"} >= 1
    [], promQL query returned unexpected results:
    haproxy_server_up{job="router-internal-default"} >= 1
    []]
occurred
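
Both queries assert that the default router is exporting reload and backend-health metrics, and both returned empty vectors, which usually means Prometheus has no samples from the router-internal-default job at all. A sketch of narrowing that down by hand against the same targets endpoint the test queries in the Stdout below (TOKEN is a placeholder bearer token; jq is assumed to be available on the client):

  # Hypothetical triage step: list active scrape targets and pick out the router-internal-default job.
  curl -s -k -H "Authorization: Bearer $TOKEN" \
    https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/targets \
    | jq '.data.activeTargets[] | select(.labels.job == "router-internal-default") | {scrapeUrl, health, lastError}'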

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:250
[It] should provide ingress metrics [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:536
Oct 13 10:18:42.882: INFO: Creating namespace "e2e-test-prometheus-z4ls2"
Oct 13 10:18:43.172: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:18:43.285: INFO: Creating new exec pod
Oct 13 10:19:37.360: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/targets"'
Oct 13 10:19:38.103: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/targets\n"
Oct 13 10:19:38.120: INFO: stdout: "{\"status\":\"success\",\"data\":{\"activeTargets\":[{\"discoveredLabels\":{\"__address__\":\"10.128.97.62:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-apiserver-operator-546f548b78-l7cdh\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openshift-apiserver-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-apiserver-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.97.62\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:eb:3e:cf\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.97.62\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:eb:3e:cf\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-apiserver-operator-546f548b78\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.97.62\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"546f548b78\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-apiserver-operator-546f548b78-l7cdh\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7e82d129-a6cb-4990-a7d9-bc53374a0a30\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-apiserver-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm
_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openshift-apiserver-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0\"},\"labels\":{\"container\":\"openshift-apiserver-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.97.62:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-apiserver-operator\",\"pod\":\"openshift-apiserver-operator-546f548b78-l7cdh\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0\",\"scrapeUrl\":\"https://10.128.97.62:8443/metrics\",\"globalUrl\":\"https://10.128.97.62:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:12.253178155Z\",\"lastScrapeDuration\":0.027244114,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent
_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"},\"labels\":{\"container\":\"openshift-apiserver-check-endpoints\",\"endpoint\":\"check-endpoints\",\"instance\":\"10.128.120.187:17698\",\"job\":\"check-endpoints\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-6sffs\",\"service\":\"check-endpoints\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\",\"scrapeUrl\":\"https://10.128.120.187:17698/metrics\",\"globalUrl\":\"https://10.128.120.187:17698/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.901340962Z\",\"lastScrapeDuration\":0.012753588,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent
_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"},\"labels\":{\"container\":\"openshift-apiserver-check-endpoints\",\"endpoint\":\"check-endpoints\",\"instance\":\"10.128.120.232:17698\",\"job\":\"check-endpoints\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-kctsl\",\"service\":\"check-endpoints\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\",\"scrapeUrl\":\"https://10.128.120.232:17698/metrics\",\"globalUrl\":\"https://10.128.120.232:17698/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.742824904Z\",\"lastScrapeDuration\":0.027122114,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_i
nclude_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"},\"labels\":{\"container\":\"openshift-apiserver-check-endpoints\",\"endpoint\":\"check-endpoints\",\"instance\":\"10.128.121.9:17698\",\"job\":\"check-endpoints\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-cwl5l\",\"service\":\"check-endpoints\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\",\"scrapeUrl\":\"https://10.128.121.9:17698/metrics\",\"globalUrl\":\"https://10.128.121.9:17698/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.161852166Z\",\"lastScrapeDuration\":0.01050134,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kub
ernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"},\"labels\":{\"apiserver\":\"openshift-apiserver\",\"container\":\"openshift-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.128.121.9:8443\",\"job\":\"api\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-cwl5l\",\"service\":\"api\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\",\"scrapeUrl\":\"https://10.128.121.9:8443/metrics\",\"globalUrl\":\"https://10.128.121.9:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:30.443263198Z\",\"lastScrapeDuration\":0.149381955,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_k
ubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"},\"labels\":{\"apiserver\":\"openshift-apiserver\",\"container\":\"openshift-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.128.120.187:8443\",\"job\":\"api\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-6sffs\",\"service\":\"api\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\",\"scrapeUrl\":\"https://10.128.120.187:8443/metrics\",\"globalUrl\":\"https://10.128.120.187:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.865818889Z\",\"lastScrapeDuration\":0.091865915,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_k
ubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"},\"labels\":{\"apiserver\":\"openshift-apiserver\",\"container\":\"openshift-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.128.120.232:8443\",\"job\":\"api\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-kctsl\",\"service\":\"api\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\",\"scrapeUrl\":\"https://10.128.120.232:8443/metrics\",\"globalUrl\":\"https://10.128.120.232:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:26.185870109Z\",\"lastScrapeDuration\":0.189400723,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.74.228:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"authentication-operator-788b66459f-ddzdg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"authentication-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-authentication-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.74.228\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5e:85:e3\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.74.228\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5e:85:e3\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"authentication-operator-788b66459f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.74.228\",\"__meta_kubernetes_pod_label_app\":\"authentication-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"788b66459f\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"authentication-operator-788b66459f-ddzdg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"56ea3d02-f1ac-40f9-bc17-195d5e8f43c5\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"authentication-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-authentication-operator/authentication-operator/0\"},\"labels\":{\"endpoint\":\"https\",\"instance\":\"10.128.74.228:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-authentication-operator\",\"pod\":\"authentication-operator-788b66459f-ddzdg\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-authentication-operator/authentication-operator/0\",\"scrapeUrl\":\"https://10.128.74.228:8443/metrics\",\"globalUrl\":\"https://10.128.74.228:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:22.790831556Z\",\"lastScrapeDuration\":0.059413039,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.116.141:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"oauth-openshift-7bc4d9f744-kvtwf\",\"__meta_kubernetes_endpoint_node_name\"
:\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"oauth-openshift\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"oauth-openshift\",\"__meta_kubernetes_namespace\":\"openshift-authentication\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.116.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:32:4d:81\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.116.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:32:4d:81\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_rvs_hash\":\"LhN4C_Fs9e4EBOG_HQKm0RnNParQYltKPI8fdru6ddi1ygGnkCHd59ZZVk38n0YN1dHxwUHSoERB6MLYDRL3xw\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_rvs_hash\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-openshift\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"oauth-openshift-7bc4d9f744\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.116.141\",\"__meta_kubernetes_pod_label_app\":\"oauth-openshift\",\"__meta_kubernetes_pod_label_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bc4d9f744\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"oauth-openshift-7bc4d9f744-kvtwf\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"334a4238-d82f-43ff-8ddb-57da32fac6cb\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"v4-0-config-system-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta
_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"oauth-openshift\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"oauth-openshift\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\"},\"labels\":{\"container\":\"oauth-openshift\",\"endpoint\":\"https\",\"instance\":\"10.128.116.141:6443\",\"job\":\"oauth-openshift\",\"namespace\":\"openshift-authentication\",\"pod\":\"oauth-openshift-7bc4d9f744-kvtwf\",\"service\":\"oauth-openshift\"},\"scrapePool\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\",\"scrapeUrl\":\"https://10.128.116.141:6443/metrics\",\"globalUrl\":\"https://10.128.116.141:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.110052972Z\",\"lastScrapeDuration\":0.045585481,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.116.190:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"oauth-openshift-7bc4d9f744-rmcd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"oauth-openshift\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"oauth-openshift\",\"__meta_kubernetes_namespace\":\"openshift-authentication\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.116.190\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b2:42:17\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.116.190\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b2:42:17\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_rvs_hash\":\"LhN4C_Fs9e4EBOG_HQKm0RnNParQYltKPI8fdru6ddi1ygGnkCHd59ZZVk38n0YN1dHxwUHSoERB6MLYDRL3xw\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_rvs_hash\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-openshift\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"oauth-openshift-7bc4d9f744\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.116.190\",\"__meta_kubernetes_pod_label_app\":\"oauth-openshift\",\"__meta_kubernetes_pod_label_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bc4d9f744\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"oauth-openshift-7bc4d9f744-rmcd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fe4a836b-2edb-4051-a184-a493c373cdcf\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"v4-0-config-system-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"oauth-openshift\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"oauth-openshift\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\"},\"labels\":{\"container\":\"oauth-openshift\",\"endpoint\":\"https\",\"instance\":\"10.128.116.190:6443\",\"job\":\"oauth-openshift\",\"namespace\":\"openshift-authentication\",\"pod\":\"oauth-openshift-7bc4d9f744-rmcd6\",\"service\":\"oauth-openshift\"},\"scrapePool\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\",\"scrapeUrl\":\"https://10.128.116.190:6443/metrics\",\"globalUrl\":\"https://10.128.116.190:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.125961796Z\",\"lastScrapeDuration\":0.035315824,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.116.139:6443\",\"__meta_kubern
etes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"oauth-openshift-7bc4d9f744-nwqnk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"oauth-openshift\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"oauth-openshift\",\"__meta_kubernetes_namespace\":\"openshift-authentication\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.116.139\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:58:53:4d\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.116.139\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:58:53:4d\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_rvs_hash\":\"LhN4C_Fs9e4EBOG_HQKm0RnNParQYltKPI8fdru6ddi1ygGnkCHd59ZZVk38n0YN1dHxwUHSoERB6MLYDRL3xw\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_rvs_hash\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-openshift\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"oauth-openshift-7bc4d9f744\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.116.139\",\"__meta_kubernetes_pod_label_app\":\"oauth-openshift\",\"__meta_kubernetes_pod_label_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bc4d9f744\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"oauth-openshift-7bc4d9f744-nwqnk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"319ceeed-1af7-4b29-bd77-7844ecad2b19\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"v4-0-config-system-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_servic
e_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"oauth-openshift\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"oauth-openshift\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\"},\"labels\":{\"container\":\"oauth-openshift\",\"endpoint\":\"https\",\"instance\":\"10.128.116.139:6443\",\"job\":\"oauth-openshift\",\"namespace\":\"openshift-authentication\",\"pod\":\"oauth-openshift-7bc4d9f744-nwqnk\",\"service\":\"oauth-openshift\"},\"scrapePool\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\",\"scrapeUrl\":\"https://10.128.116.139:6443/metrics\",\"globalUrl\":\"https://10.128.116.139:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:08.81740541Z\",\"lastScrapeDuration\":0.045039532,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.62.5:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cloud-credential-operator-5dc9b88859-x9ckp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cco-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cloud-credential-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.62.5\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:47:5f:af\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.62.5\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:47:5f:af\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cloud-credential-operator-5dc9b88859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.62.5\",\"__meta_kubernetes_pod_label_app\":\"cloud-credential-operator\",\"__meta_kubernetes_pod_label_control_plane\":\"controller-manager\",\"__meta_kubernetes_pod_label_controller_tools_k8s_io\":\"1.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5dc9b88859\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_control_plane\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_tools_k8s_io\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cloud-credential-operator-5dc9b88859-x9ckp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b04bf08f-5ee4-4230-a764-4b9450a669b0\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cloud-credential-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"cco-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.62.5:8443\",\"job\":\"cco-metrics\",\"namespace\":\"openshift-cloud-credential-operator\",\"pod\":\"cloud-credential-operator-5dc9b88859-x9ckp\",\"service\":\"cco-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0\",\"scrapeUrl\":\"https://10.128.62.5:8443/metrics\",\"globalUrl\":\"https://10.128.62.5:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.468746684Z\",\"lastScrapeDuration\":0.01799071,\"health\":\"u
p\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"},\"labels\":{\"container\":\"provisioner-kube-rbac-proxy\",\"endpoint\":\"provisioner-m\",\"instance\":\"10.196.0.105:9202\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":
\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\",\"scrapeUrl\":\"https://10.196.0.105:9202/metrics\",\"globalUrl\":\"https://10.196.0.105:9202/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:15.247525211Z\",\"lastScrapeDuration\":0.005503444,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__m
eta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"},\"labels\":{\"container\":\"provisioner-kube-rbac-proxy\",\"endpoint\":\"provisioner-m\",\"instance\":\"10.196.3.178:9202\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\",\"scrapeUrl\":\"https://10.196.3.178:9202/metrics\",\"globalUrl\":\"https://10.196.3.178:9202/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.426686928Z\",\"lastScrapeDuration\":0.007045632,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io
_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"},\"labels\":{\"container\":\"attacher-kube-rbac-proxy\",\"endpoint\":\"attacher-m\",\"instance\":\"10.196.0.105:9203\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\",\"scrapeUrl\":\"https://10.196.0.105:9203/metrics\",\"globalUrl\":\"https://10.196.0.105:9203/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.157015604Z\",\"lastScrapeDuration\":0.004119542,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\
":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"},\"labels\":{\"container\":\"attacher-kube-rbac-proxy\",\"endpoint\":\"attacher-m\",\"instance\":\"10.196.3.178:9203\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\",\"scrapeUrl\":\"https://10.196.3.178:9203/metrics\",\"globalUrl\":\"https://10.196.3.178:9203/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.689467811Z\",\"lastScrapeDuration\":0.007335703,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__met
a_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"},\"labels\":{\"container\":\"resizer-kube-rbac-proxy\",\"endpoint\":\"resizer-m\",\"instance\":\"10.196.0.105:9204\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\",\"scrapeUrl\":\"https://10.196.0.105:9204/metrics\",\"globalUrl\":\"https://10.196.0.105:9204/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.45930964Z\",\"lastScrapeDuration\":0.003662094,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092df
e49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"},\"labels\":{\"container\":\"resizer-kube-rbac-proxy\",\"endpoint\":\"resizer-m\",\"instance\":\"10.196.3.178:9204\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\",\"scrapeUrl\":\"https://10.196.3.178:9204/metrics\",\"globalUrl\":\"https://10.196.3.178:9204/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:08.527401721Z\",\"lastScrapeDuration\":0.003125686,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubern
etes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"},\"labels\":{\"container\":\"snapshotter-kube-rbac-proxy\",\"endpoint\":\"snapshotter-m\",\"instance\":\"10.196.0.105:9205\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\",\"scrapeUrl\":\"https://10.196.0.105:9205/metrics\",\"globalUrl\":\"https://10.196.0.105:9205/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.370489063Z\",\"lastScrapeDuration\":0.003117199,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\
"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"},\"labels\":{\"container\":\"snapshotter-kube-rbac-proxy\",\"endpoint\":\"snapshotter-m\",\"instance\":\"10.196.3.178:9205\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"service\":\"openstack-c
inder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\",\"scrapeUrl\":\"https://10.196.3.178:9205/metrics\",\"globalUrl\":\"https://10.196.3.178:9205/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.608231784Z\",\"lastScrapeDuration\":0.005526959,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-approver-d4748548d-wc7k6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"machine-approver\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-approver\",\"__meta_kubernetes_namespace\":\"openshift-cluster-machine-approver\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-approver-d4748548d\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"machine-approver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"d4748548d\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-approver-d4748548d-wc7k6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5f2960a2-9fac-4af6-a7a2-3acecdf0994c\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-approver-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"machine-approver\",\"__meta_kubernet
es_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-approver\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:9192\",\"job\":\"machine-approver\",\"namespace\":\"openshift-cluster-machine-approver\",\"pod\":\"machine-approver-d4748548d-wc7k6\",\"service\":\"machine-approver\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0\",\"scrapeUrl\":\"https://10.196.0.105:9192/metrics\",\"globalUrl\":\"https://10.196.0.105:9192/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.449590327Z\",\"lastScrapeDuration\":0.018503683,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.33.187:60000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-node-tuning-operator-6497f89df8-trnb7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"node-tuning-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-tuning-operator\",\"__meta_kubernetes_namespace\":\"openshift-cluster-node-tuning-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.33.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:05:4e:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.33.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:05:4e:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-node-tuning-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-node-tuning-operator-6497f89df8\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.33.187\",\"__meta_kubernetes_pod_label_name\":\"cluster-node-tuning-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6497f89df8\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-node-tuning-operator-6497f89df8-trnb7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"56602893-aede-4034-a781-8e61a61108ee\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-tuning-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"node-tuning-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"node-tuning-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0\"},\"labels\":{\"container\":\"cluster-node-tuning-operator\",\"endpoint\":\"60000\",\"instance\":\"10.128.33.187:60000\",\"job\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\",\"pod\":\"cluster-node-tuning-operator-6497f89df8-trnb7\",\"service\":\"node-tuning-operator\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0\",\"scrapeUrl\":\"https://10.128.33.187:60000/metrics\",\"globalUrl\":\"https://10.128.33.187:60000/metric
s\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:18:52.788010494Z\",\"lastScrapeDuration\":0.005562724,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.27.226:60000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-samples-operator-84c8d6b664-5s6ss\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"cluster-samples-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-samples-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.27.226\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e2:8a:b7\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.27.226\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e2:8a:b7\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-samples-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-samples-operator-84c8d6b664\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.27.226\",\"__meta_kubernetes_pod_label_name\":\"cluster-samples-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"84c8d6b664\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-samples-operator-84c8d6b664-5s6ss\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b74e39c3-0fad-4f9d-a03b-b5f51a1cf857\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"samples-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_anno
tationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"cluster-samples-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0\"},\"labels\":{\"container\":\"cluster-samples-operator\",\"endpoint\":\"60000\",\"instance\":\"10.128.27.226:60000\",\"job\":\"metrics\",\"namespace\":\"openshift-cluster-samples-operator\",\"pod\":\"cluster-samples-operator-84c8d6b664-5s6ss\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0\",\"scrapeUrl\":\"https://10.128.27.226:60000/metrics\",\"globalUrl\":\"https://10.128.27.226:60000/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:18:44.104524245Z\",\"lastScrapeDuration\":0.010710215,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.52.71:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-storage-operator-769c6b74d9-8rp8q\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-storage-operator-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-storage-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-storage-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.52.71\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fa:c9:ff\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.52.71\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fa:c9:ff\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-storage-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-storage-operator-769c6b74d9\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.52.71\",\"__meta_kubernetes_pod_label_name\":\"cluster-storage-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"769c6b74d9\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-storage-operator-769c6b74d9-8rp8q\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4281d8e7-f78d-47b3-bcc8-e4e74080e804\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-storage-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-storage-operator-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-storage-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0\"},\"labels\":{\"container\":\"cluster-storage-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.52.71:8443\",\"job\":\"cluster-storage-operator-metrics\",\"namespace\":\"openshift-cluster-storage-operator\",\"pod\":\"cluster-storage-operator-769c6b74d9-8rp8q\",\"service\":\"cluster-storage-operator-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0\",\"scrapeUrl\":\"https://10.128.52.71:8443/metrics\",\"globalUrl\":\"https:/
/10.128.52.71:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.581041121Z\",\"lastScrapeDuration\":0.023721771,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9099\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-version-operator-765fc9d8cb-86btb\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-version-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-version-operator\",\"__meta_kubernetes_namespace\":\"openshift-cluster-version\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-version-operator-765fc9d8cb\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-version-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"765fc9d8cb\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-version-operator-765fc9d8cb-86btb\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8657bbe8-1946-4615-a11a-753ad48ee115\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-version-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-version-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-version-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-version/cluster-version-operator/0\"},\"labels\":{\"endpoint\":\"metrics\",\"instance\":\"10.196.3.187:9099\",\"job\":\"cluster-version-operator\",\"namespace\":\"openshift-cluster-version\",\"pod\":\"cluster-version-operator-765fc9d8cb-86btb\",\"service\":\"cluster-version-operator\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-version/cluster-version-operator/0\",\"scrapeUrl\":\"https://10.196.3.187:9099/metrics\",\"globalUrl\":
\"https://10.196.3.187:9099/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.601477518Z\",\"lastScrapeDuration\":0.017908332,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.73.213:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-config-operator-5654d7f9fc-dr2kj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openshift-config-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-config-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.73.213\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:3a:75:7b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.73.213\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:3a:75:7b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-config-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-config-operator-5654d7f9fc\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.73.213\",\"__meta_kubernetes_pod_label_app\":\"openshift-config-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5654d7f9fc\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-config-operator-5654d7f9fc-dr2kj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a4e299b6-fee2-4b7f-8411-b2b6980e2cbc\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"config-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotati
onpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openshift-config-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-config-operator/config-operator/0\"},\"labels\":{\"container\":\"openshift-config-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.73.213:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-config-operator\",\"pod\":\"openshift-config-operator-5654d7f9fc-dr2kj\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-config-operator/config-operator/0\",\"scrapeUrl\":\"https://10.128.73.213:8443/metrics\",\"globalUrl\":\"https://10.128.73.213:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.811022777Z\",\"lastScrapeDuration\":0.026739778,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.133.246:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"console-operator-7dbd68dd4b-44sxf\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"console-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-console-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.133.246\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:7b:40:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.133.246\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:7b:40:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"console-operator-7dbd68dd4b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.133.246\",\"__meta_kubernetes_pod_label_name\":\"console-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7dbd68dd4b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"console-operator-7dbd68dd4b-44sxf\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e9f337bf-a4d7-43c4-b3f1-154403484b7f\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"console-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-console-operator/console-operator/0\"},\"labels\":{\"endpoint\":\"https\",\"instance\":\"10.128.133.246:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-console-operator\",\"pod\":\"console-operator-7dbd68dd4b-44sxf\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-console-operator/console-operator/0\",\"scrapeUrl\":\"https://10.128.133.246:8443/metrics\",\"globalUrl\":\"https://10.128.133.246:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:11.256323704Z\",\"lastScrapeDuration\":0.039847012,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.48.110:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-controller-manager-operator-68c4bd4c8-tgrgc\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_ku
bernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.48.110\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:16:a6:05\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.48.110\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:16:a6:05\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-controller-manager-operator-68c4bd4c8\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.48.110\",\"__meta_kubernetes_pod_label_app\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"68c4bd4c8\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-controller-manager-operator-68c4bd4c8-tgrgc\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8df194bb-c941-4319-a18b-c2943ee1c557\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-controller-manager-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alph
a_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0\"},\"labels\":{\"container\":\"openshift-controller-manager-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.48.110:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-controller-manager-operator\",\"pod\":\"openshift-controller-manager-operator-68c4bd4c8-tgrgc\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0\",\"scrapeUrl\":\"https://10.128.48.110:8443/metrics\",\"globalUrl\":\"https://10.128.48.110:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.44586628Z\",\"lastScrapeDuration\":0.024415192,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.110.148:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"controller-manager-p9snj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.110.148\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:db:0c:b5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.110.148\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:db:0c:b5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_force\":\"9c8024de-583f-4c3a-98c3-9520f9a74d10\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_force\":\"true\",\"__meta_kubernetes_pod_container_name\":\"controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"controller-manager\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.110.148\",\"__meta_kubernetes_pod_label_app\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_label_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7664fc7754\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"12\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"controller-manager-p9snj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"9576865f-eaac-48f5-9682-a7737ad33b3a\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\"},\"labels\":{\"container\":\"controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.128.110.148:8443\",\"job\":\"controller-manager\",\"namespace\":\"openshift-controller-manager\",\"pod\":\"controller-manager-p9snj\",\"service\":\"controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\",\"scrapeUrl\":\"https://10.128.110.148:8443/metrics\",\"globalUrl\":\"https://10.128.110.148:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.931325552Z\",\"lastScrapeDuration\":0.005279165,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"
10.128.110.159:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"controller-manager-fq5jx\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.110.159\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:de:ca:d0\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.110.159\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:de:ca:d0\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_force\":\"9c8024de-583f-4c3a-98c3-9520f9a74d10\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_force\":\"true\",\"__meta_kubernetes_pod_container_name\":\"controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"controller-manager\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.110.159\",\"__meta_kubernetes_pod_label_app\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_label_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7664fc7754\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"12\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"controller-manager-fq5jx\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57c93add-4cd2-4295-b3eb-51de98766ecf\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_servi
ce_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\"},\"labels\":{\"container\":\"controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.128.110.159:8443\",\"job\":\"controller-manager\",\"namespace\":\"openshift-controller-manager\",\"pod\":\"controller-manager-fq5jx\",\"service\":\"controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\",\"scrapeUrl\":\"https://10.128.110.159:8443/metrics\",\"globalUrl\":\"https://10.128.110.159:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.302061596Z\",\"lastScrapeDuration\":0.018836283,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.111.48:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"controller-manager-2zdvm\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.111.48\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:c9:36\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.111.48\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:c9:36\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_force\":\"9c8024de-583f-4c3a-98c3-9520f9a74d10\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_force\":\"true\",\"__meta_kubernetes_pod_container_name\":\"controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"controller-manager\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.111.48\",\"__meta_kubernetes_pod_label_app\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_label_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7664fc7754\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"12\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"controller-manager-2zdvm\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"274636e4-c599-4b2c-8b13-1863af739102\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\"},\"labels\":{\"container\":\"controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.128.111.48:8443\",\"job\":\"controller-manager\",\"namespace\":\"openshift-controller-manager\",\"pod\":\"controller-manager-2zdvm\",\"service\":\"controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\",\"scrapeUrl\":\"https://10.128.111.48:8443/metrics\",\"globalUrl\":\"https://10.128.111.48:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.873227554Z\",\"lastScrapeDuration\":0.009550715,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.1
28.37.87:9393\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-operator-66f5f8df4f-7v8dq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"dns-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-dns-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.37.87\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a1:98:f6\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.37.87\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a1:98:f6\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9393\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-operator-66f5f8df4f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.37.87\",\"__meta_kubernetes_pod_label_name\":\"dns-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"66f5f8df4f\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-operator-66f5f8df4f-7v8dq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"dda1bc99-60c7-4ad3-a55a-8d1ef8728649\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotat
ionpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"dns-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns-operator/dns-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.37.87:9393\",\"job\":\"metrics\",\"namespace\":\"openshift-dns-operator\",\"pod\":\"dns-operator-66f5f8df4f-7v8dq\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-dns-operator/dns-operator/0\",\"scrapeUrl\":\"https://10.128.37.87:9393/metrics\",\"globalUrl\":\"https://10.128.37.87:9393/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:26.292832122Z\",\"lastScrapeDuration\":0.018784759,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.127.52:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-hpsll\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.52\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.52\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.127.52\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-hpsll\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ae463ca1-be02-483f-9849-3e204beb4658\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.127.52:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-hpsll\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.127.52:9154/metrics\",\"globalUrl\":\"https://10.128.127.52:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.367484992Z\",\"lastScrapeDuration\":0.006761172,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.126.114:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": 
\\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.126.114\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"33957bcb-47be-49a6-83ad-300d0d7ffb69\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.126.114:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-wzmlj\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.126.114:9154/metrics\",\"globalUrl\":\"https://10.128.126.114:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:11.287688307Z\",\"lastScrapeDuration\":0.007791984,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.126.55:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.55\\\"\\n    
],\\n    \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.55\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.126.55\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f5ce003d-9392-40ac-a34e-8aa47c675f95\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.126.55:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-xb9vg\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.126.55:9154/metrics\",\"globalUrl\":\"https://10.128.126.55:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.961703077Z\",\"lastScrapeDuration\":0.012521505,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.126.73:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-n757c\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__met
a_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.73\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.73\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.126.73\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-n757c\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"22ea4790-c277-42c5-879d-f80c4aaa075d\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.126.73:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-n757c\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.126.73:9154/metrics\",\"globalUrl\":\"https://10.128.126.73:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.417278358Z\",\"lastScrapeDuration\":0.005076544,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.127.108:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-25bww\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\"
,\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.108\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.108\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.127.108\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-25bww\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"c0db5e71-94aa-4c0a-b650-7e5e3cb98e3e\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.127.108:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-25bww\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.127.108:9154/metrics\",\"globalUrl\":\"https://10.128.127.108:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.952339361Z\",\"lastScrapeDuration\":0.009716933,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.127.168:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_
port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.168\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.168\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.127.168\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"31663356-b33c-43ae-a208-ed3064fcf0ee\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.127.168:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-x6w5l\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.127.168:9154/metrics\",\"globalUrl\":\"https://10.128.127.168:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.954931462Z\",\"lastScrapeDuration\":0.005886826,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.40.74:8443\",\"__meta_kubernetes_e
ndpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-operator-764984fdd-cqns7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"etcd-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-etcd-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.40.74\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:76:d6:be\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.40.74\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:76:d6:be\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"etcd-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"etcd-operator-764984fdd\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.40.74\",\"__meta_kubernetes_pod_label_app\":\"etcd-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"764984fdd\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-operator-764984fdd-cqns7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"24cb69e2-236c-45d1-ba9b-5951cbc0b6e8\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"etcd-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_
openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"etcd-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-etcd-operator/etcd-operator/0\"},\"labels\":{\"container\":\"etcd-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.40.74:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-etcd-operator\",\"pod\":\"etcd-operator-764984fdd-cqns7\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-etcd-operator/etcd-operator/0\",\"scrapeUrl\":\"https://10.128.40.74:8443/metrics\",\"globalUrl\":\"https://10.128.40.74:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:14.651715269Z\",\"lastScrapeDuration\":0.103086036,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.83.151:60000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"image-registry-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"image-registry-operator\",\"__meta_kubernetes_namespace\":\"openshift-image-registry\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.83.151\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ca:de:36\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.83.151\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ca:de:36\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-image-registry-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-image-registry-operator-6cfc44cd58\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.83.151\",\"__meta_kubernetes_pod_label_name\":\"cluster-image-registry-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6cfc44cd58\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6f65971b-96c4-4cbd-9b8f-df3a6984fed3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"image-registry-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"image-registry-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"image-registry-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-image-registry/image-registry-operator/0\"},\"labels\":{\"container\":\"cluster-image-registry-operator\",\"endpoint\":\"60000\",\"instance\":\"10.128.83.151:60000\",\"job\":\"image-registry-operator\",\"namespace\":\"openshift-image-registry\",\"pod\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"service\":\"image-registry-operator\"},\"scrapePool\":\"serviceMonitor/openshift-image-registry/image-registry-operator/0\",\"scrapeUrl\":\"https://10.128.83.151:60000/metrics\",\"globalUrl\":\"https://10.128.83.151:60000/met
rics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.966396173Z\",\"lastScrapeDuration\":0.003201038,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.83.90:5000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"image-registry-5dcfbfdb49-m9mjk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"5000-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_docker_registry\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_endpoints_name\":\"image-registry\",\"__meta_kubernetes_namespace\":\"openshift-image-registry\",\"__meta_kubernetes_pod_annotation_imageregistry_operator_openshift_io_dependencies_checksum\":\"sha256:c2e4379a3614d3c6245d6a72b78f2bc288bf39df517d68b7c6dd5439a409036c\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.83.90\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1e:6d:d3\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.83.90\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1e:6d:d3\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_imageregistry_operator_openshift_io_dependencies_checksum\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry\",\"__meta_kubernetes_pod_container_port_number\":\"5000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"image-registry-5dcfbfdb49\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.83.90\",\"__meta_kubernetes_pod_label_docker_registry\":\"default\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5dcfbfdb49\",\"__meta_kubernetes_pod_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"image-registry-5dcfbfdb49-m9mjk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b6cdb3a-3f4f-4e5e-8e6c-5dda0d62ec22\",\"__meta_kubernetes_service_annotation_imageregistry_operator_openshift_io_checksum\":\"sha256:1c19715a76014ae1d56140d6390a08f14f453c1a59dc36c15718f40c638ef63d\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"image-registry-tls\",\"__meta_kubernetes_service_annotationpresent_imageregistry_operator_openshift_io_checksum\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_docker_registry\":\"default\",\"__meta_kubernete
s_service_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_service_name\":\"image-registry\",\"__metrics_path__\":\"/extensions/v2/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-image-registry/image-registry/0\"},\"labels\":{\"container\":\"registry\",\"endpoint\":\"5000-tcp\",\"instance\":\"10.128.83.90:5000\",\"job\":\"image-registry\",\"namespace\":\"openshift-image-registry\",\"pod\":\"image-registry-5dcfbfdb49-m9mjk\",\"service\":\"image-registry\"},\"scrapePool\":\"serviceMonitor/openshift-image-registry/image-registry/0\",\"scrapeUrl\":\"https://10.128.83.90:5000/extensions/v2/metrics\",\"globalUrl\":\"https://10.128.83.90:5000/extensions/v2/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.514785918Z\",\"lastScrapeDuration\":0.039108771,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.59.173:9393\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"ingress-operator-854bc688f9-lg2hg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"ingress-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-ingress-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.59.173\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d0:47:b1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.59.173\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d0:47:b1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9393\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"ingress-operator-854bc688f9\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.59.173\",\"__meta_kubernetes_pod_label_name\":\"ingress-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"854bc688f9\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"ingress-operator-854bc688f9-lg2hg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"433cff22-73d0-4f33-bd96-649e821932f7\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"ingress-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress-operator/ingress-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.59.173:9393\",\"job\":\"metrics\",\"namespace\":\"openshift-ingress-operator\",\"pod\":\"ingress-operator-854bc688f9-lg2hg\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-ingress-operator/ingress-operator/0\",\"scrapeUrl\":\"https://10.128.59.173:9393/metrics\",\"globalUrl\":\"https://10.128.59.173:9393/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.319097889Z\",\"lastScrapeDuration\":0.02144239,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.19
6.0.199:1936\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"1936\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7a994a2f-c4ec-4a4c-b4ae-b9ef7f93bb00\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"},\
"labels\":{\"container\":\"router\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.199:1936\",\"job\":\"router-internal-default\",\"namespace\":\"openshift-ingress\",\"pod\":\"router-default-697ff75b79-qcfbg\",\"service\":\"router-internal-default\"},\"scrapePool\":\"serviceMonitor/openshift-ingress/router-default/0\",\"scrapeUrl\":\"https://10.196.0.199:1936/metrics\",\"globalUrl\":\"https://10.196.0.199:1936/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:08.778124155Z\",\"lastScrapeDuration\":0.035368818,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:1936\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"1936\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"74040c8a-de64-4dff-943f-8e9a926a790e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annota
tionpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"},\"labels\":{\"container\":\"router\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.169:1936\",\"job\":\"router-internal-default\",\"namespace\":\"openshift-ingress\",\"pod\":\"router-default-697ff75b79-t6b78\",\"service\":\"router-internal-default\"},\"scrapePool\":\"serviceMonitor/openshift-ingress/router-default/0\",\"scrapeUrl\":\"https://10.196.2.169:1936/metrics\",\"globalUrl\":\"https://10.196.2.169:1936/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.944375108Z\",\"lastScrapeDuration\":0.019583431,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.29.145:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"insights-operator-54767897df-vbchm\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"insights-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-insights\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.29.145\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:30:86:ed\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.29.145\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:30:86:ed\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"insights-operator\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"insights-operator-54767897df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.29.145\",\"__meta_kubernetes_pod_label_app\":\"insights-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"54767897df\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"insights-operator-54767897df-vbchm\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"c1cc781b-ec36-43c0-be31-e31e50df6f49\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-insights-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"insights-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-insights/insights-operator/0\"},\"labels\":{\"container\":\"insights-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.29.145:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-insights\",\"pod\":\"insights-operator-54767897df-vbchm\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-insights/insights-operator/0\",\"scrapeUrl\":\"https://10.128.29.145:8443/metrics\",\"globalUrl\":\"https://10.128.29.145:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:26.18982589Z\",\"lastScrapeDuration\":0.031010874,\"health\":\"up\"},{\"discoveredLabels\":{\"__ad
dress__\":\"10.128.87.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-apiserver-operator-7f59b6f8c4-jthtm\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kube-apiserver-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-kube-apiserver-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.87.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:9a:30:26\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.87.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:9a:30:26\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-apiserver-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-apiserver-operator-7f59b6f8c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.87.239\",\"__meta_kubernetes_pod_label_app\":\"kube-apiserver-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7f59b6f8c4\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-apiserver-operator-7f59b6f8c4-jthtm\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f97319cf-40ed-4a80-837a-cb028bc49508\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"kube-apiserver-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kub
ernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"kube-apiserver-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0\"},\"labels\":{\"container\":\"kube-apiserver-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.87.239:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-kube-apiserver-operator\",\"pod\":\"kube-apiserver-operator-7f59b6f8c4-jthtm\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0\",\"scrapeUrl\":\"https://10.128.87.239:8443/metrics\",\"globalUrl\":\"https://10.128.87.239:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.65832312Z\",\"lastScrapeDuration\":0.0214069,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:6443\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubernetes\",\"__meta_kubernetes_namespace\":\"default\",\"__meta_kubernetes_service_label_component\":\"apiserver\",\"__meta_kubernetes_service_label_provider\":\"kubernetes\",\"__meta_kubernetes_service_labelpresent_component\":\"true\",\"__meta_kubernetes_service_labelpresent_provider\":\"true\",\"__meta_kubernetes_service_name\":\"kubernetes\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\"},\"labels\":{\"apiserver\":\"kube-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:6443\",\"job\":\"apiserver\",\"namespace\":\"default\",\"service\":\"kubernetes\"},\"scrapePool\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\",\"scrapeUrl\":\"https://10.196.0.105:6443/metrics\",\"globalUrl\":\"https://10.196.0.105:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:27.79305226Z\",\"lastScrapeDuration\":0.244647646,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:6443\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubernetes\",\"__meta_kubernetes_namespace\":\"default\",\"__meta_kubernetes_service_label_component\":\"apiserver\",\"__meta_kubernetes_service_label_provider\":\"kubernetes\",\"__meta_kubernetes_service_labelpresent_component\":\"true\",\"__meta_kubernetes_service_labelpresent_provider\":\"true\",\"__meta_kubernetes_service_name\":\"kubernetes\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\"},\"labels\":{\"apiserver\":\"kube-apiserver\
",\"endpoint\":\"https\",\"instance\":\"10.196.3.178:6443\",\"job\":\"apiserver\",\"namespace\":\"default\",\"service\":\"kubernetes\"},\"scrapePool\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\",\"scrapeUrl\":\"https://10.196.3.178:6443/metrics\",\"globalUrl\":\"https://10.196.3.178:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.452195688Z\",\"lastScrapeDuration\":0.401500015,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:6443\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubernetes\",\"__meta_kubernetes_namespace\":\"default\",\"__meta_kubernetes_service_label_component\":\"apiserver\",\"__meta_kubernetes_service_label_provider\":\"kubernetes\",\"__meta_kubernetes_service_labelpresent_component\":\"true\",\"__meta_kubernetes_service_labelpresent_provider\":\"true\",\"__meta_kubernetes_service_name\":\"kubernetes\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\"},\"labels\":{\"apiserver\":\"kube-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.196.3.187:6443\",\"job\":\"apiserver\",\"namespace\":\"default\",\"service\":\"kubernetes\"},\"scrapePool\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\",\"scrapeUrl\":\"https://10.196.3.187:6443/metrics\",\"globalUrl\":\"https://10.196.3.187:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:30.687675426Z\",\"lastScrapeDuration\":0.436359264,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.25.14:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-controller-manager-operator-7b9f4f4cdf-4n52n\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kube-controller-manager-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.25.14\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e2:50:e1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.25.14\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e2:50:e1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-controller-manager-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-controller-manager-operator-7b9f4f4cdf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.25.14\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7b9f4f4cdf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-operator-7b9f4f4cdf-4n52n\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f093eaa7-c949-484c-830a-8e29e64deb7b\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-controller-manager-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"kube-controller-manager-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0\"},\"labels\":{\"container\":\"kube-controller-manager-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.25.14:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-kube-controller-manager-operator\",\"pod\":\"kube-controller-manager-operator-7b9f4f4cdf-4n52n\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0\",\"scrapeUrl\":\"https://10.128.25.14:8443/metrics\",\"globalUrl\":\"https://10.128.25.14:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.114642782Z\",\"lastScrapeDuration
\":0.022422876,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10257\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-controller-manager-ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"30cc4fad-2707-49ca-8af4-654dfe7049f2\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"30cc4fad-2707-49ca-8af4-654dfe7049f2\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:01.957733716Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"10257\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"9fe004e7-c0d0-4b1a-bc98-e115973fe308\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshif
t_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"},\"labels\":{\"container\":\"kube-controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:10257\",\"job\":\"kube-controller-manager\",\"namespace\":\"openshift-kube-controller-manager\",\"pod\":\"kube-controller-manager-ostest-n5rnf-master-0\",\"service\":\"kube-controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\",\"scrapeUrl\":\"https://10.196.0.105:10257/metrics\",\"globalUrl\":\"https://10.196.0.105:10257/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.528167286Z\",\"lastScrapeDuration\":0.519380403,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10257\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-controller-manager-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"4d079c6f-40c7-4c4b-9915-95bfdc4d90bf\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"4d079c6f-40c7-4c4b-9915-95bfdc4d90bf\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:50.144170849Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"10257\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"dafaafdf-d6ab-43af-a3b8-182083a9c825\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"},\"labels\":{\"container\":\"kube-controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.196.3.178:10257\",\"job\":\"kube-controller-manager\",\"namespace\":\"openshift-kube-controller-manager\",\"pod\":\"kube-controller-manager-ostest-n5rnf-master-1\",\"service\":\"kube-controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\",\"scrapeUrl\":\"https://10.196.3.178:10257/metrics\",\"globalUrl\":\"https://10.196.3.178:10257/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.775634904Z\",\"lastScrapeDuration\":0.038075955,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10257\",\"__meta_kubernetes_endpoint_address_target_kind
\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-controller-manager-ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"8673eaec-7022-428b-9556-52d3f1ba194f\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"8673eaec-7022-428b-9556-52d3f1ba194f\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:15.460702568Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"10257\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e6e98f52-d119-440e-88f0-02ce9237fa4d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true
\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"},\"labels\":{\"container\":\"kube-controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.196.3.187:10257\",\"job\":\"kube-controller-manager\",\"namespace\":\"openshift-kube-controller-manager\",\"pod\":\"kube-controller-manager-ostest-n5rnf-master-2\",\"service\":\"kube-controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\",\"scrapeUrl\":\"https://10.196.3.187:10257/metrics\",\"globalUrl\":\"https://10.196.3.187:10257/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.007468609Z\",\"lastScrapeDuration\":0.013537433,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.12.37:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-operator-66c644698-767c2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openshift-kube-scheduler-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.12.37\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d0:96:94\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.12.37\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d0:96:94\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-kube-scheduler-operator-66c644698\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.12.37\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"66c644698\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-operator-66c644698-767c2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ffbb21d7-6360-4aa3-9f64-a6c9c169318d\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"kube-scheduler-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openshift-kube-scheduler-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0\"},\"labels\":{\"endpoint\":\"https\",\"instance\":\"10.128.12.37:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-kube-scheduler-operator\",\"pod\":\"openshift-kube-scheduler-operator-66c644698-767c2\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0\",\"scrapeUrl\":\"https://10.128.12.37:8443/metrics\",\"globalUrl\":\"https://10.128.12.37:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:37.604162472Z\",\"lastScrapeDuration\":0.016354755,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kube
rnetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"1867b8bd-c706-476a-9511-936fdd6139d6\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"1867b8bd-c706-476a-9511-936fdd6139d6\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:16.955822852Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1bcaee97-1a38-4283-9a2d-41e514e74562\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-
scheduler/kube-scheduler/0\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\",\"scrapeUrl\":\"https://10.196.0.105:10259/metrics\",\"globalUrl\":\"https://10.196.0.105:10259/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.515431376Z\",\"lastScrapeDuration\":0.03260175,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"83bc82d7-6403-4f3e-aa8f-8e945f447d1e\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"83bc82d7-6403-4f3e-aa8f-8e945f447d1e\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:16.042484581Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1139f840-9de9-4ce6-a949-4acc83331b22\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_ku
bernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.3.178:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\",\"scrapeUrl\":\"https://10.196.3.178:10259/metrics\",\"globalUrl\":\"https://10.196.3.178:10259/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:16.769490584Z\",\"lastScrapeDuration\":0.033924075,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"9d7a833b-10ce-49d4-9b73-999cbb8f381c\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"9d7a833b-10ce-49d4-9b73-999cbb8f381c\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:04.640756071Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"356d4529-8a6a-4a65-a827-a2e6bdcefa33\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.3.187:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\",\"scrapeUrl\":\"https://10.196.3.187:10259/metrics\",\"globalUrl\":\"https://10.196.3.187:10259/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.632645378Z\",\"lastScrapeDuration\":0.024249241,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"__meta_kubern
etes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"1867b8bd-c706-476a-9511-936fdd6139d6\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"1867b8bd-c706-476a-9511-936fdd6139d6\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:16.955822852Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1bcaee97-1a38-4283-9a2d-41e514e74562\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics/re
sources\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\",\"scrapeUrl\":\"https://10.196.0.105:10259/metrics/resources\",\"globalUrl\":\"https://10.196.0.105:10259/metrics/resources\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.46566805Z\",\"lastScrapeDuration\":0.029085268,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"83bc82d7-6403-4f3e-aa8f-8e945f447d1e\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"83bc82d7-6403-4f3e-aa8f-8e945f447d1e\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:16.042484581Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1139f840-9de9-4ce6-a949-4acc83331b22\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics/resources\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.3.178:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\",\"scrapeUrl\":\"https://10.196.3.178:10259/metrics/resources\",\"globalUrl\":\"https://10.196.3.178:10259/metrics/resources\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:27.419146966Z\",\"lastScrapeDuration\":0.020658887,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5
rnf-master-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"9d7a833b-10ce-49d4-9b73-999cbb8f381c\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"9d7a833b-10ce-49d4-9b73-999cbb8f381c\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:04.640756071Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"356d4529-8a6a-4a65-a827-a2e6bdcefa33\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__
metrics_path__\":\"/metrics/resources\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.3.187:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\",\"scrapeUrl\":\"https://10.196.3.187:10259/metrics/resources\",\"globalUrl\":\"https://10.196.3.187:10259/metrics/resources\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:20.196234933Z\",\"lastScrapeDuration\":0.006441924,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-cjcgk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-cjcgk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"bbdf1c26-e361-4015-9404-a307c40d0734\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.105:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-cjcgk\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshi
ft-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.0.105:9655/metrics\",\"globalUrl\":\"http://10.196.0.105:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:14.807987556Z\",\"lastScrapeDuration\":0.015268035,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-xzbzv\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-xzbzv\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"false\",\"__meta_kubernetes_pod_uid\":\"9a46eb61-8782-4c26-9e89-8fef6e4a33e9\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.199:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-xzbzv\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.0.199:9655/metrics\",\"globalUrl\":\"http://10.196.0.199:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.235107026Z\",\"lastScrapeDuration\":0.004634639,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-crfvc\"
,\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-crfvc\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"false\",\"__meta_kubernetes_pod_uid\":\"de39c947-6203-413a-aa51-b069776af721\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.169:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-crfvc\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.2.169:9655/metrics\",\"globalUrl\":\"http://10.196.2.169:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.747325582Z\",\"lastScrapeDuration\":0.005496352,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-2rrvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_en
dpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-2rrvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e6e1bace-f2ff-419b-9206-323d49ce67ec\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.72:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-2rrvs\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.2.72:9655/metrics\",\"globalUrl\":\"http://10.196.2.72:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:23.686478061Z\",\"lastScrapeDuration\":0.004933796,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-ndzt5\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\
":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-ndzt5\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5497497a-dd9f-464c-a031-1af7c8a3123c\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.178:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-ndzt5\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.3.178:9655/metrics\",\"globalUrl\":\"http://10.196.3.178:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.377750451Z\",\"lastScrapeDuration\":0.016292805,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-t448w\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\
"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-t448w\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"568d2b5d-b1f3-4810-8ef5-058a27e6266a\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.187:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-t448w\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.3.187:9655/metrics\",\"globalUrl\":\"http://10.196.3.187:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.317261456Z\",\"lastScrapeDuration\":0.055843776,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9654\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-controller-7654df4d98-f2qvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-controller\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9654\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-controller-7654df4d98\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kuryr-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7654df4d98\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-controller-7654df
4d98-f2qvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2543a36c-08af-4a31-9ae6-f0cb7c99a745\",\"__meta_kubernetes_service_label_app\":\"kuryr-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"},\"labels\":{\"container\":\"controller\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.178:9654\",\"job\":\"kuryr-controller\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-controller-7654df4d98-f2qvz\",\"service\":\"kuryr-controller\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\",\"scrapeUrl\":\"http://10.196.3.178:9654/metrics\",\"globalUrl\":\"http://10.196.3.178:9654/metrics\",\"lastError\":\"Get \\\"http://10.196.3.178:9654/metrics\\\": context deadline exceeded\",\"lastScrape\":\"2022-10-13T10:18:24.918549909Z\",\"lastScrapeDuration\":30.00047891,\"health\":\"down\"},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.45.39:9192\",\"job\":\"cluster-autoscaler-operator\",\"namespace\":\"openshift-machine-api\",\"pod\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"service\":\"cluster-autoscaler-operator\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\",\"scrapeUrl\":\"https://10.128.45.39:9192/metrics\",\"globalUrl\":\"https://10.128.45.39:9192/metrics\",\
"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.891552226Z\",\"lastScrapeDuration\":0.025146096,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machine-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-
controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"},\"labels\":{\"container\":\"kube-rbac-proxy-machine-mtrc\",\"endpoint\":\"machine-mtrc\",\"instance\":\"10.128.44.154:8441\",\"job\":\"machine-api-controllers\",\"namespace\":\"openshift-machine-api\",\"pod\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"service\":\"machine-api-controllers\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\",\"scrapeUrl\":\"https://10.128.44.154:8441/metrics\",\"globalUrl\":\"https://10.128.44.154:8441/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.194383994Z\",\"lastScrapeDuration\":0.020119044,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"},\"labels\":{\"container\":\"kube-rbac-proxy-machineset-mtrc\",\"endpoint\":\"machineset-mtrc\",\"instance\":\"10.128.44.154:8442\",\"job\":\"machine-api-co
ntrollers\",\"namespace\":\"openshift-machine-api\",\"pod\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"service\":\"machine-api-controllers\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\",\"scrapeUrl\":\"https://10.128.44.154:8442/metrics\",\"globalUrl\":\"https://10.128.44.154:8442/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.904432709Z\",\"lastScrapeDuration\":0.023808989,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"tr
ue\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"},\"labels\":{\"container\":\"kube-rbac-proxy-mhc-mtrc\",\"endpoint\":\"mhc-mtrc\",\"instance\":\"10.128.44.154:8444\",\"job\":\"machine-api-controllers\",\"namespace\":\"openshift-machine-api\",\"pod\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"service\":\"machine-api-controllers\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\",\"scrapeUrl\":\"https://10.128.44.154:8444/metrics\",\"globalUrl\":\"https://10.128.44.154:8444/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:30.228431216Z\",\"lastScrapeDuration\":0.015670893,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.44.42:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-operator-74b9f87587\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.42\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"74b9f87587\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90b05b44-49bd-4179-af1a-b1ffb84bf9e4\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.44.42:8443\",\"job\":\"machine-api-operator\",\"namespace\":\"openshift-machine-api\",\"pod\":\"machine-api-operator-74b9f87587-s6jf2\",\"service\":\"machine-api-operator\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\",\"scrapeUrl\":\"https://10.128.44.42:8443/metrics\",\"globalUrl\":\"https://10.128.44.42:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:08.552830794Z\",\"lastScrape
Duration\":0.019565875,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-7nbkb\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-7nbkb\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"23bebf09-fce1-46a3-ab7d-9f2c6be459cf\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"
container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.187:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-master-2\",\"pod\":\"machine-config-daemon-7nbkb\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.3.187:9001/metrics\",\"globalUrl\":\"https://10.196.3.187:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.627115784Z\",\"lastScrapeDuration\":0.012285439,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-s42r2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-s42r2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ca09e5cb-456f-4900-a4a4-da8699d8ea6d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\"
,\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.105:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-master-0\",\"pod\":\"machine-config-daemon-s42r2\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.0.105:9001/metrics\",\"globalUrl\":\"https://10.196.0.105:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:14.269681827Z\",\"lastScrapeDuration\":0.013502674,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-twth5\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-twth5\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d445393-db4d-4b75-b45d-05c4248a66e7\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tl
s\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.199:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"pod\":\"machine-config-daemon-twth5\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.0.199:9001/metrics\",\"globalUrl\":\"https://10.196.0.199:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.18878583Z\",\"lastScrapeDuration\":0.004698552,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-hmq85\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-hmq85\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"11ee5a
22-7c69-4d1f-a773-71b0d48e28f1\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.169:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"pod\":\"machine-config-daemon-hmq85\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.2.169:9001/metrics\",\"globalUrl\":\"https://10.196.2.169:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.55262142Z\",\"lastScrapeDuration\":0.004336073,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-rrg8p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__
meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-rrg8p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b094f84e-7c68-4df8-ab47-e0e40d515b76\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.72:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"pod\":\"machine-config-daemon-rrg8p\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.2.72:9001/metrics\",\"globalUrl\":\"https://10.196.2.72:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:12.694720207Z\",\"lastScrapeDuration\":0.016292188,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-kc9g6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_nam
espace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-kc9g6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"655a3677-59eb-4cd3-811e-ecad4da2edc1\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.178:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-master-1\",\"pod\":\"machine-config-daemon-kc9g6\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.3.178:9001/metrics\",\"globalUrl\":\"https://10.196.3.178:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.84057596Z\",\"lastScrapeDuration\":0.010546749,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.79.141:8081\",\"__meta_kuberne
tes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"marketplace-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"marketplace-operator-79fb778f6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.79.141\",\"__meta_kubernetes_pod_label_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79fb778f6b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b3bba0b4-92e7-461f-abff-61fc1b5cd349\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_
serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"marketplace-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"marketplace-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.128.79.141:8081\",\"job\":\"marketplace-operator-metrics\",\"namespace\":\"openshift-marketplace\",\"pod\":\"marketplace-operator-79fb778f6b-qc8zr\",\"service\":\"marketplace-operator-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\",\"scrapeUrl\":\"https://10.128.79.141:8081/metrics\",\"globalUrl\":\"https://10.128.79.141:8081/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.446273311Z\",\"lastScrapeDuration\":0.006143945,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"},\"labels\":{\"container\":\"alertmanager-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.22.112:9095\",\"job\":\"alertmanager-main\",\"namespace\":\"openshift-monitoring\",\"pod\":\"alertmanager-main-1\",\"service\":\"alertmanager-main\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/alertmanager/0\",\"scrapeUrl\":\"https://10.128.22.112:9095/metrics\",\"globalUrl\":\"https://10.128.22.112:9095/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.422413698Z\",\"lastScrapeDuration\":0.012617696,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"},\"labels\":{\"container\":\"alertmanager-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.138:9095\",\"job\":\"alertmanager-main\",\"namespace\":\"openshift-monitoring\",\"pod\":\"alertmanager-main-2\",\"service\":\"alertmanager-main\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/alertmanager/0\",\"scrapeUrl\":\"https://10.128.23.138:9095/metrics\",\"globalUrl\":\"https://10.128.23.138:9095/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:10.949991298Z\",\"lastScrapeDuration\":0.024768801,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"},\"labels\":{\"container\":\"alertmanager-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.161:9095\",\"job\":\"alertmanager-main\",\"namespace\":\"openshift-monitoring\",\"pod\":\"alertmanager-main-0\",\"service\":\"alertmanager-main\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/alertmanager/0\",\"scrapeUrl\":\"https://10.128.23.161:9095/metrics\",\"globalUrl\":\"https://10.128.23.161:9095/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.106133479Z\",\"lastScrapeDuration\":0.017946279,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.23.49:8443\",\"job\":\"cluster-monitoring-operator\",\"namespace\":\"openshift-monitoring\",\"pod\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"service\":\"cluster-monitoring-operator\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\",\"scrapeUrl\":\"https://10.128.23.49:8443/metrics\",\"globalUrl\":\"https://10.128.23.49:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:21.672271645Z\",\"lastScrapeDuration\":0.009641354,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9979\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"etcd-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kub
ernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:28:22.756939605Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"742f6dc2-47a0-41cc-b0a9-13e66d83f057\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"},\"labels\":{\"endpoint\":\"etcd-metrics\",\"instance\":\"10.196.0.105:9979\",\"job\":\"etcd\",\"namespace\":\"openshift-etcd\",\"pod\":\"etcd-ostest-n5rnf-master-0\",\"service\":\"etcd\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/etcd/0\",\"scrapeUrl\":\"https://10.196.0.105:9979/metrics\",\"globalUrl\":\"https://10.196.0.105:9979/metrics\",\"lastE
rror\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.905482827Z\",\"lastScrapeDuration\":0.053485266,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9979\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"etcd-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:56.640481859Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6891d70c-a3ec-4d90-b283-d4abf49382d3\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"},\"labels\":{\"endpoint\":\"etcd-metrics\",\"instance\":\"10.196.3.178:9979\",\"job\":\"etcd\",\"namespace\":\"openshift-etcd\",\"pod\":\"etcd-ostest-n5rnf-master-1\",\"service\":\"etcd\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/etcd/0\",\"scrapeUrl\":\"https://10.196.3.178:9979/metrics\",\"globalUrl\":\"https://10.196.3.178:9979/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.53801222Z\",\"lastScrapeDuration\":0.046140542,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9979\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"etcd-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:36.245067150Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"49572518-4248-4dc2-8392-e8298ad9706c\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"},\"labels\":{\"endpoint\":\"etcd-metrics\",\"instance\":\"10.196.3.187:9979\",\"job\":\"etcd\",\"namespace\":\"openshift-etcd\",\"pod\":\"etcd-ostest-n5rnf-master-2\",\"service\":\"etcd\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/etcd/0\",\"scrapeUrl\":\"https://10.196.3.187:9979/metrics\",\"globalUrl\":\"https://10.196.3.187:9979/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.571197242Z\",\"lastScrapeDuration\":0.073158888,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_end
points_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_se
cret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"},\"labels\":{\"container\":\"grafana-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.22.230:3000\",\"job\":\"grafana\",\"namespace\":\"openshift-monitoring\",\"pod\":\"grafana-7c5c5fb5b6-cht4p\",\"service\":\"grafana\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/grafana/0\",\"scrapeUrl\":\"https://10.128.22.230:3000/metrics\",\"globalUrl\":\"https://10.128.22.230:3000/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.211282031Z\",\"lastScrapeDuration\":0.014224974,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"},\"labels\"
:{\"container\":\"kube-rbac-proxy-main\",\"endpoint\":\"https-main\",\"instance\":\"10.128.22.45:8443\",\"job\":\"kube-state-metrics\",\"namespace\":\"openshift-monitoring\",\"service\":\"kube-state-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\",\"scrapeUrl\":\"https://10.128.22.45:8443/metrics\",\"globalUrl\":\"https://10.128.22.45:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.162133437Z\",\"lastScrapeDuration\":0.10426944,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"},\"labels\"
:{\"container\":\"kube-rbac-proxy-self\",\"endpoint\":\"https-self\",\"instance\":\"10.128.22.45:9443\",\"job\":\"kube-state-metrics\",\"namespace\":\"openshift-monitoring\",\"pod\":\"kube-state-metrics-754df74859-w8k5h\",\"service\":\"kube-state-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\",\"scrapeUrl\":\"https://10.128.22.45:9443/metrics\",\"globalUrl\":\"https://10.128.22.45:9443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:23.054084271Z\",\"lastScrapeDuration\":0.006612329,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.105:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-0\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.0.105:10250/metrics\",\"globalUrl\":\"https://10.196.0.105:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.857908889Z\",\"lastScrapeDuration\":0.098449187,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\"
,\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.178:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-1\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.3.178:10250/metrics\",\"globalUrl\":\"https://10.196.3.178:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:10.364120808Z\",\"lastScrapeDuration\":0.049145813,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.187:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-2\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.3.187:10250/metrics\",\"globalUrl\":\"https://10.196.3.187:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.430867893Z\",\"lastScrapeDuration\":0.128554348,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\"
,\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.72:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.2.72:10250/metrics\",\"globalUrl\":\"https://10.196.2.72:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.80061983Z\",\"lastScrapeDuration\":0.064200147,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.169:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.2.169:10250/metrics\",\"globalUrl\":\"https://10.196.2.169:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.435691052Z\",\"lastScrapeDuration\":6.756325893,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.1
96.0.199:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.199:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.0.199:10250/metrics\",\"globalUrl\":\"https://10.196.0.199:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:10.017521429Z\",\"lastScrapeDuration\":0.15388102,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.105:10250\
",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-0\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.0.105:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.0.105:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.882904975Z\",\"lastScrapeDuration\":1.088934562,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.178:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-1\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.3.178:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.3.178:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:07.513564555Z\",\"lastScrapeDuration\":1.739874066,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"
kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.187:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-2\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.3.187:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.3.187:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:11.434396679Z\",\"lastScrapeDuration\":1.785947507,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.72:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.2.72:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.2.72:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:11.898295528Z\",\"lastScrapeDuration\":0.463552284,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io
_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.169:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.2.169:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.2.169:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:12.553753786Z\",\"lastScrapeDuration\":0.536215622,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.199:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.0.199:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.0.199:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.197167099Z\",\"la
stScrapeDuration\":0.515055528,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.105:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-0\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.0.105:10250/metrics/probes\",\"globalUrl\":\"https://10.196.0.105:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:30.57273602Z\",\"lastScrapeDuration\":0.002274152,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/op
enshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.178:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-1\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.3.178:10250/metrics/probes\",\"globalUrl\":\"https://10.196.3.178:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.38781671Z\",\"lastScrapeDuration\":0.003897559,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.187:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-2\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.3.187:10250/metrics/probes\",\"globalUrl\":\"https://10.196.3.187:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.911603476Z\",\"lastScrapeDuration\":0.002449197,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io
_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.72:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.2.72:10250/metrics/probes\",\"globalUrl\":\"https://10.196.2.72:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.708813915Z\",\"lastScrapeDuration\":0.00350919,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.169:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.2.169:10250/metrics/probes\",\"globalUrl\":\"https://10.196.2.169:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:14.680926571Z\",\"lastScrapeDuration\":0.002269746,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_m
anaged_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.199:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.0.199:10250/metrics/probes\",\"globalUrl\":\"https://10.196.0.199:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:26.488814277Z\",\"lastScrapeDuration\":0.002607716,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.0.105:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-0\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.0.105:9537/metrics\",\"globalUrl\":\"http://10.196.0.105:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.212484511Z\",\"lastScrapeDuratio
n\":0.006502641,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.3.178:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-1\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.3.178:9537/metrics\",\"globalUrl\":\"http://10.196.3.178:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.88625022Z\",\"lastScrapeDuration\":0.005996596,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.3.187:9537\
",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-2\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.3.187:9537/metrics\",\"globalUrl\":\"http://10.196.3.187:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:07.861087393Z\",\"lastScrapeDuration\":0.006478659,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.2.72:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.2.72:9537/metrics\",\"globalUrl\":\"http://10.196.2.72:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.129390454Z\",\"lastScrapeDuration\":0.007325069,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_b
y\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.2.169:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.2.169:9537/metrics\",\"globalUrl\":\"http://10.196.2.169:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.845647523Z\",\"lastScrapeDuration\":0.006217764,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.0.199:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.0.199:9537/metrics\",\"globalUrl\":\"http://10.196.0.199:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.695366205Z\",\"lastScrapeDuration\":0.00543279,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_compon
ent\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"no
de-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-master-0\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-p5vmg\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.0.105:9100/metrics\",\"globalUrl\":\"https://10.196.0.105:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.169027835Z\",\"lastScrapeDuration\":0.107931286,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"_
_meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-worker-0-j4pkp\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-7cn6l\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.0.199:9100/metrics\",\"globalUrl\":\"https://10.196.0.199:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.820630372Z\",\"lastScrapeDuration\":0.02903944,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true
\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-worker-0-94fxs\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-fvjvs\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.2.169:9100/metrics\",\"globalUrl\":\"https://10.196.2.169:9100/metrics\",\"lastError\":\"\",\"lastSc
rape\":\"2022-10-13T10:19:33.884564822Z\",\"lastScrapeDuration\":0.028085904,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_servin
g_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-worker-0-8kq82\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-7n85z\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.2.72:9100/metrics\",\"globalUrl\":\"https://10.196.2.72:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.691928253Z\",\"lastScrapeDuration\":0.023609318,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exp
orter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-master-1\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-dlzvz\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.3.178:9100/metrics\",\"globalUrl\":\"https://10.196.3.178:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.877484535Z\",\"lastScrapeDuration\":0.064567261,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_ku
bernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kuberne
tes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-master-2\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-g96tz\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.3.187:9100/metrics\",\"globalUrl\":\"https://10.196.3.187:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.169320021Z\",\"lastScrapeDuration\":0.129569916,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"},\"labels\":{\"container\":\"kube-rbac-proxy-main\",\"endpoint\":\"https-main\",\"instance\":\"10.128.22.89:8443\",\"job\":\"openshift-state-metrics\",\"namespace\":\"openshift-monitoring\",\"pod\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"service\":\"openshift-state-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\",\"scrapeUrl\":\"https://10.128.22.89:8443/metrics\",\"globalUrl\":\"https://10.128.22.89:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:18:08.638710192Z\",\"lastScrapeDuration\":0.004114451,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent
_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"},\"labels\":{\"container\":\"kube-rbac-proxy-self\",\"endpoint\":\"https-self\",\"instance\":\"10.128.22.89:9443\",\"job\":\"openshift-state-metrics\",\"namespace\":\"openshift-monitoring\",\"pod\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"service\":\"openshift-state-metrics\"},\"scrapePool\":\"se
rviceMonitor/openshift-monitoring/openshift-state-metrics/1\",\"scrapeUrl\":\"https://10.128.22.89:9443/metrics\",\"globalUrl\":\"https://10.128.22.89:9443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:27.707756506Z\",\"lastScrapeDuration\":0.004276215,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"},\"labels\":{\"container\":\"prometheus-adapter\",\"endpoint\":\"https\",\"instance\":\"10.128.23.77:6443\",\"job\":\"prometheus-adapter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-adapter-86cfd468f7-blrxn\",\"service\":\"prometheus-ad
apter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\",\"scrapeUrl\":\"https://10.128.23.77:6443/metrics\",\"globalUrl\":\"https://10.128.23.77:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:16.892674609Z\",\"lastScrapeDuration\":0.018140589,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"},\"labels\":{\"container\":\"prometheus-adapter\",\"endpoint\":\"https\",\"instance\":\"10.128.23.82:6443\",\"job\":\"prometheus-adapter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"service\":\"prometheus-ada
pter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\",\"scrapeUrl\":\"https://10.128.23.82:6443/metrics\",\"globalUrl\":\"https://10.128.23.82:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.106615906Z\",\"lastScrapeDuration\":0.018610834,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"},\"labels\":{\"container\":\"prometheus-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.18:9091\",\"job\":\"prometheus-k8s\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-k8s-0\",\"service\":\"prometheus-k8s\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\",\"scrapeUrl\":\"https://10.128.23.18:9091/metrics\",\"globalUrl\":\"https://10.128.23.18:9091/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.006329147Z\",\"lastScrapeDuration\":0.035074093,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"},\"labels\":{\"container\":\"prometheus-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.35:9091\",\"job\":\"prometheus-k8s\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-k8s-1\",\"service\":\"prometheus-k8s\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\",\"scrapeUrl\":\"https://10.128.23.35:9091/metrics\",\"globalUrl\":\"https://10.128.23.35:9091/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:10.554808033Z\",\"lastScrapeDuration\":0.032665281,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"},\"labels\":
{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.22.177:8443\",\"job\":\"prometheus-operator\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"service\":\"prometheus-operator\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\",\"scrapeUrl\":\"https://10.128.22.177:8443/metrics\",\"globalUrl\":\"https://10.128.22.177:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.019364187Z\",\"lastScrapeDuration\":0.013974122,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.22.239:8443\",\"job\":\"telemeter-client\",\"namespace\":\"openshift-monitoring\",\"pod\":\"telemeter-client-6d8969b4bf-dffrt\",\"service\":\"telemeter-client\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\",\"scrapeUrl\":\"https://10.128.22.239:8443/metrics\",\"globalUrl\":\"https://10.128.22.239:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:15.31457454Z\",\"lastScrapeDuration\":0.004959401,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_
name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"_
_meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.114:9091\",\"job\":\"thanos-querier\",\"namespace\":\"openshift-monitoring\",\"pod\":\"thanos-querier-6699db6d95-cvbzq\",\"service\":\"thanos-querier\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\",\"scrapeUrl\":\"https://10.128.23.114:9091/metrics\",\"globalUrl\":\"https://10.128.23.114:9091/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.006530509Z\",\"lastScrapeDuration\":0.011510362,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.183:9091\",\"job\":\"thanos-querier\",\"namespace\":\"openshift-monitoring\",\"pod\":\"thanos-querier-6699db6d95-42mpw\",\"service\":\"thanos-querier\"},\"scrapeP
ool\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\",\"scrapeUrl\":\"https://10.128.23.183:9091/metrics\",\"globalUrl\":\"https://10.128.23.183:9091/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.296367017Z\",\"lastScrapeDuration\":0.02241396,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"},\"labels\":{\"container\":\"kube-rbac-proxy-thanos\",\"endpoint\":\"thanos-proxy\",\"instance\":\"10.128.23.35:10902\",\"job\":\"prometheus-k8s-thanos-sidecar\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-k8s-1\",\"service\":\"prometheus-k8s-thanos-sidecar\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\",\"scrapeUrl\":\"https://10.128.23.35:10902/metrics\",\"globalUrl\":\"https://10.128.23.35:10902/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.988989234Z\",\"lastScrapeDuration\":0.007803824,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n  
      \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent
_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"},\"labels\":{\"container\":\"kube-rbac-proxy-thanos\",\"endpoint\":\"thanos-proxy\",\"instance\":\"10.128.23.18:10902\",\"job\":\"prometheus-k8s-thanos-sidecar\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-k8s-0\",\"service\":\"prometheus-k8s-thanos-sidecar\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\",\"scrapeUrl\":\"https://10.128.23.18:10902/metrics\",\"globalUrl\":\"https://10.128.23.18:10902/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.866499737Z\",\"lastScrapeDuration\":0.006439213,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.19:8443\",\"job\":\"multus-admission-controller\",\"namespace\":\"openshift-multus\",\"pod\":\"multus-admission-controller-flt6k\",\"service\":\"multus-admission-controller\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\",\"scrapeUrl\":\"https://10.128.34.19:8443/metrics\",\"globalUrl\":\"https://10.128.34.19:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.76701535Z\",\"lastScrapeDuration\":0.00939373,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:8443\",\"__meta_kubernetes_endpoint_address_ta
rget_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annot
ationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.23:8443\",\"job\":\"multus-admission-controller\",\"namespace\":\"openshift-multus\",\"pod\":\"multus-admission-controller-xj8rp\",\"service\":\"multus-admission-controller\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\",\"scrapeUrl\":\"https://10.128.34.23:8443/metrics\",\"globalUrl\":\"https://10.128.34.23:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.884037882Z\",\"lastScrapeDuration\":0.009206721,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.59:8443\",\"job\":\"multus-admission-controller\",\"namespace\":\"openshift-multus\",\"pod\":\"multus-admission-controller-pprg6\",\"service\":\"multus-admission-controller\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\",\"scrapeUrl\":\"https://10.128.34.59:8443/metrics\",\"globalUrl\":\"https://10.128.34.59:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:12.649242685Z\",\"lastScrapeDuration\":0.003407487,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.62:8443\",\"__meta_kubernetes_endpoint_address_
target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-98jr8\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.62\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:4d:80:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.62\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:4d:80:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.62\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-98jr8\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b9e25138-56b7-4086-b0d8-bbfad8d59d29\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_
openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.62:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-98jr8\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.34.62:8443/metrics\",\"globalUrl\":\"https://10.128.34.62:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.029317843Z\",\"lastScrapeDuration\":0.011697606,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.92:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-xh8kk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.92\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:94:47\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.92\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:94:47\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.92\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-xh8kk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"78e54083-207a-4a1d-9ac3-1e61e4c3a94d\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.92:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-xh8kk\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.34.92:8443/metrics\",\"globalUrl\":\"https://10.128.34.92:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.19888255Z\",\"lastScrapeDuration\":0.004268387,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.35.157:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_ad
dress_target_name\":\"network-metrics-daemon-9vnl8\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.35.157\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:80:04:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.35.157\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:80:04:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.35.157\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-9vnl8\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"eab7a941-acc9-4f7a-9e27-bfda6efdc8b7\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.35.157:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-9vnl8\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.35.157:8443/metrics\",\"globalUrl\":\"https://10.128.35.157:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.526186331Z\",\"lastScrapeDuration\":0.002245776,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.35.46:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-6p764\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.35.46\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:21:c6:58\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.35.46\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:21:c6:58\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.35.46\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-6p764\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f1a5dd1f-c96d-435e-a2c2-414ef30007b0\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.35.46:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-6p764\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.35.46:8443/metrics\",\"globalUrl\":\"https://10.128.35.46:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.671457298Z\",\"lastScrapeDuration\":0.002166266,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.135:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endp
oint_address_target_name\":\"network-metrics-daemon-mmmtp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.135\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:0f:7c:01\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.135\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:0f:7c:01\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.135\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-mmmtp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"3e837b28-47f3-449c-a549-2f35716eadac\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"t
rue\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.135:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-mmmtp\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.34.135:8443/metrics\",\"globalUrl\":\"https://10.128.34.135:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.885591077Z\",\"lastScrapeDuration\":0.00666558,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.247:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-rwwwz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.247\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ad:57:02\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.247\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ad:57:02\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.34.247\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-rwwwz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5cc84773-7d05-45e6-9e0e-c1d785d19d6f\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.247:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-rwwwz\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.34.247:8443/metrics\",\"globalUrl\":\"https://10.128.34.247:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.583768924Z\",\"lastScrapeDuration\":0.004258343,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.103.204:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes
_endpoint_address_target_name\":\"network-check-source-84dfc9ddb-46tsr\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"network-check-source\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-source\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.204\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5f:a0:61\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.204\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5f:a0:61\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-source-84dfc9ddb\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.103.204\",\"__meta_kubernetes_pod_label_app\":\"network-check-source\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"84dfc9ddb\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-source-84dfc9ddb-46tsr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"750fdda1-ded7-4131-9bd7-f42602a669d4\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_label_app\":\"network-check-source\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"network-check-source\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"},\"labels\":{\"container\":\"check-endpoints\",\"endpoint\":\"check-endpoints\",\"instance\":\"10.128.103.204:17698\",\"job\":\"network-check-source\",\"namespace\":\"openshift-network-diagnostics\",\"pod\":\"network-check-source-84dfc9ddb-46tsr\",\"service\":\"network-check-
source\"},\"scrapePool\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\",\"scrapeUrl\":\"https://10.128.103.204:17698/metrics\",\"globalUrl\":\"https://10.128.103.204:17698/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.779419396Z\",\"lastScrapeDuration\":0.013312408,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.93.117:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"catalog-operator-7c7d96d8d6-bfvts\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"catalog-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"catalog-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.117\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:29:8b:73\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.117\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:29:8b:73\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"catalog-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"catalog-operator-7c7d96d8d6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.93.117\",\"__meta_kubernetes_pod_label_app\":\"catalog-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c7d96d8d6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"catalog-operator-7c7d96d8d6-bfvts\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"245bde86-6823-4aaf-9b27-aaad0428d6f6\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"catalog-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_se
rvice_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"catalog-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"catalog-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\"},\"labels\":{\"container\":\"catalog-operator\",\"endpoint\":\"https-metrics\",\"instance\":\"10.128.93.117:8443\",\"job\":\"catalog-operator-metrics\",\"namespace\":\"openshift-operator-lifecycle-manager\",\"pod\":\"catalog-operator-7c7d96d8d6-bfvts\",\"service\":\"catalog-operator-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\",\"scrapeUrl\":\"https://10.128.93.117:8443/metrics\",\"globalUrl\":\"https://10.128.93.117:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.31223398Z\",\"lastScrapeDuration\":0.006748421,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.92.123:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"olm-operator-56f75d4687-pdzb6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"olm-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"olm-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.92.123\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:08:05:71\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.92.123\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:08:05:71\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"olm-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"olm-operator-56f75d4687\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.92.123\",\"__meta_kubernetes_pod_label_app\":\"olm-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"56f75d4687\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"olm-operator-56f75d4687-pdzb6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90bf0bdc-6d48-4eb2-bc10-49acdc5bc676\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"olm-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"olm-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"olm-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\"},\"labels\":{\"container\":\"olm-operator\",\"endpoint\":\"https-metrics\",\"instance\":\"10.128.92.123:8443\",\"job\":\"olm-operator-metrics\",\"namespace\":\"openshift-operator-lifecycle-manager\",\"pod\":\"olm-operator-56f75d4687-pdzb6\",\"service\":\"olm-operator-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\",\"scrapeUrl\":\"https://10.128.92.123:8443/metrics\",\"globalUrl\":\"https://10.128.92.123:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:15.389777292Z\",\"lastScrapeDuration\":0.004625075,\"health\":\"up
\"},{\"discoveredLabels\":{\"__address__\":\"10.128.56.252:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"service-ca-operator-6d88c88495-pzm78\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"service-ca-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-service-ca-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.56.252\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c7:ec:8e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.56.252\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c7:ec:8e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"service-ca-operator-6d88c88495\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.56.252\",\"__meta_kubernetes_pod_label_app\":\"service-ca-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d88c88495\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"service-ca-operator-6d88c88495-pzm78\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"870f3d5b-b205-4ac6-9b28-042e2d7859b1\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresen
t_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"service-ca-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-service-ca-operator/service-ca-operator/0\"},\"labels\":{\"endpoint\":\"https\",\"instance\":\"10.128.56.252:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-service-ca-operator\",\"pod\":\"service-ca-operator-6d88c88495-pzm78\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-service-ca-operator/service-ca-operator/0\",\"scrapeUrl\":\"https://10.128.56.252:8443/metrics\",\"globalUrl\":\"https://10.128.56.252:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:22.524570882Z\",\"lastScrapeDuration\":0.028526831,\"health\":\"up\"}],\"droppedTargets\":[{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_k
ubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_k
ubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kub
ernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686
df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_
serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind
\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_ku
bernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_la
belpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kube
rnetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"_
_meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_
serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__met
a_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent
_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kin
d\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_i
nclude_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.187\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernet
es_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.120.232\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_ku
bernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.121.9\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\
"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__met
a_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_
secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signe
r@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving
_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controll
er-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\"
,\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5
b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"
true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",
\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d8
49f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kuberne
tes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cind
er-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod
_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_
annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__add
ress__\":\"10.196.0.105:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_k
ubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"tr
ue\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openst
ack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod
_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresen
t_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"
openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed
_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",
\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io
_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta
_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__me
ta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-cs
i-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endp
oints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"
__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.52.143:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"csi-snapshot-webhook-7b969bc879-j7bqg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"csi-snapshot-webhook\",\"__meta_kubernetes_namespace\":\"openshift-cluster-storage-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.52.143\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:4c:d4:b3\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.52.143\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:4c:d4:b3\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"webhook\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"csi-snapshot-webhook-7b969bc879\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.52.143\",\"__meta_kubernetes_pod_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7b969bc879\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"csi-snapshot-webhook-7b969bc879-j7bqg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f72f577-7838-4bdc-a7d7-809d2c435ee8\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"csi-snapshot-webhook-secret\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"csi-snapshot-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.52.66:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"csi-snapshot-webhook-7b969bc879-tzkvg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"csi-snaps
hot-webhook\",\"__meta_kubernetes_namespace\":\"openshift-cluster-storage-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.52.66\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:89:98:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.52.66\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:89:98:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"webhook\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"csi-snapshot-webhook-7b969bc879\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.52.66\",\"__meta_kubernetes_pod_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7b969bc879\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"csi-snapshot-webhook-7b969bc879-tzkvg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"bc78ca4d-597c-403c-8377-9e25ec01a959\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"csi-snapshot-webhook-secret\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"csi-snapshot-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\
"https\",\"job\":\"serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.53.147:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"csi-snapshot-controller-operator-547fc5c4f-f6m26\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"csi-snapshot-controller-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"csi-snapshot-controller-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-storage-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.53.147\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:60:80:5f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.53.147\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:60:80:5f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"csi-snapshot-controller-operator-547fc5c4f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.53.147\",\"__meta_kubernetes_pod_label_app\":\"csi-snapshot-controller-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"547fc5c4f\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"csi-snapshot-controller-operator-547fc5c4f-f6m26\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"25bbc5e7-e57b-4530-96a1-13d9a30fb5f2\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_o
penshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"csi-snapshot-controller-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"csi-snapshot-controller-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.133.246:60000\",\"__meta_kubernetes_endpoints_label_name\":\"console-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-console-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.133.246\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:7b:40:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.133.246\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:7b:40:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"console-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"console-operator-7dbd68dd4b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.133.246\",\"__meta_kubernetes_pod_label_name\":\"console-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7dbd68dd4b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"console-operator-7dbd68dd4b-44sxf\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e9f337bf-a4d7-43c4-b3f1-154403484b7f\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_servi
ce_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"console-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-console-operator/console-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.114:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.126.114\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"33957bcb-47be-49a6-83ad-300d0d7ffb69\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.55:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.55\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.55\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.126.55\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f5ce003d-9392-40ac-a34e-8aa47c675f95\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.73:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-n757c\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.73\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.73\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.126.73\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-n757c\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"22ea4790-c277-42c5-879d-f80c4aaa075d\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.108:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-25bww\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.108\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.108\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.127.108\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-25bww\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"c0db5e71-94aa-4c0a-b650-7e5e3cb98e3e\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.168:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.168\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.168\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.127.168\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"31663356-b33c-43ae-a208-ed3064fcf0ee\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.52:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-hpsll\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.52\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.52\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.127.52\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-hpsll\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ae463ca1-be02-483f-9849-3e204beb4658\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.114:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.126.114\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"33957bcb-47be-49a6-83ad-300d0d7ffb69\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.55:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.55\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.55\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.126.55\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f5ce003d-9392-40ac-a34e-8aa47c675f95\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.73:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-n757c\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.73\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.126.73\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.126.73\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-n757c\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"22ea4790-c277-42c5-879d-f80c4aaa075d\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.108:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-25bww\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.108\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.108\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.127.108\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-25bww\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"c0db5e71-94aa-4c0a-b650-7e5e3cb98e3e\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.168:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.168\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.168\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.127.168\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"31663356-b33c-43ae-a208-ed3064fcf0ee\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.52:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-hpsll\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.52\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.127.52\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.127.52\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-hpsll\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ae463ca1-be02-483f-9849-3e204beb4658\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.83.90:5000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"image-registry-5dcfbfdb49-m9mjk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"5000-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_docker_registry\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_endpoints_name\":\"image-registry\",\"__meta_kubernetes_namespace\":\"openshift-image-registry\",\"__meta_kubernetes_pod_annotation_imageregistry_operator_openshift_io_dependencies_checksum\":\"sha256:c2e4379a3614d3c6245d6a72b78f2bc288bf39df517d68b7c6dd5439a409036c\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.83.90\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1e:6d:d3\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.83.90\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1e:6d:d3\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_imageregistry_operator_openshift_io_dependencies_checksum\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry\",\"__meta_kubernetes_pod_container_port_number\":\"5000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"image-registry-5dcfbfdb49\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.83.90\",\"__meta_kubernetes_pod_label_docker_registry\":\"default\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5dcfbfdb49\",\"__meta_kubernetes_pod_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"image-registry-5dcfbfdb49-m9mjk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b6cdb3a-3f4f-4e5e-8e6c-5dda0d62ec22\",\"__meta_kubernetes_service_annotation_imageregistry_operator_openshift_io_checksum\":\"sha256:1c19715a76014ae1d56140d6390a08f14f453c1a59dc36c15718f40c638ef63d\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"image-registry-tls\",\"__meta_kubernetes_service_annotationpresent_imageregistry_operator_openshift_io_checksum\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_docker_registry\":\"default\",\"__meta_kubernetes_service_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_service_name\":\"image-registry\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-image-registry/image-registry-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.83.151:60000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"image-registry-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"image-registry-operator\",\"__meta_kubernetes_namespace\":\"openshift-image-registry\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.83.151\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ca:de:36\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.83.151\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ca:de:36\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-image-registry-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-image-registry-operator-6cfc44cd58\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.83.151\",\"__meta_kubernetes_pod_label_name\":\"cluster-image-registry-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6cfc44cd58\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6f65971b-96c4-4cbd-9b8f-df3a6984fed3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"image-registry-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"image-registry-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"image-registry-operator\",\"__metrics_path__\":\"/extensions/v2/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-image-registry/image-registry/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_
operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7a994a2f-c4ec-4a4c-b4ae-b9ef7f93bb00\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kuber
netes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"74040c8a-de64-4dff-943f-8e9a926a790e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:80\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"http\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_en
dpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"80\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7a994a2f-c4ec-4a4c-b4ae-b9ef7f93bb00\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:80\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"http\",\"__meta_kubernetes_endpoint_port_proto
col\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"80\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"74040c8a-de64-4dff-943f-8e9a926a790e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10357\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_
annotation_kubernetes_io_config_hash\":\"30cc4fad-2707-49ca-8af4-654dfe7049f2\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"30cc4fad-2707-49ca-8af4-654dfe7049f2\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:01.957733716Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-policy-controller\",\"__meta_kubernetes_pod_container_port_number\":\"10357\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"9fe004e7-c0d0-4b1a-bc98-e115973fe308\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10357\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container
\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"4d079c6f-40c7-4c4b-9915-95bfdc4d90bf\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"4d079c6f-40c7-4c4b-9915-95bfdc4d90bf\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:50.144170849Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-policy-controller\",\"__meta_kubernetes_pod_container_port_number\":\"10357\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"dafaafdf-d6ab-43af-a3b8-182083a9c825\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10357\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_
annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"8673eaec-7022-428b-9556-52d3f1ba194f\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"8673eaec-7022-428b-9556-52d3f1ba194f\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:15.460702568Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-policy-controller\",\"__meta_kubernetes_pod_container_port_number\":\"10357\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e6e98f52-d119-440e-88f0-02ce9237fa4d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9654\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_na
me\":\"kuryr-controller-7654df4d98-f2qvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-controller\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9654\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-controller-7654df4d98\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kuryr-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7654df4d98\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-controller-7654df4d98-f2qvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2543a36c-08af-4a31-9ae6-f0cb7c99a745\",\"__meta_kubernetes_service_label_app\":\"kuryr-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-cjcgk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app
\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-cjcgk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"bbdf1c26-e361-4015-9404-a307c40d0734\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-xzbzv\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-xzbzv\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5
rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"false\",\"__meta_kubernetes_pod_uid\":\"9a46eb61-8782-4c26-9e89-8fef6e4a33e9\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-crfvc\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-crfvc\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"false\",\"__meta_kubernetes_pod_uid\":\"de39c947-6203-413a-aa51-b069776af721\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-2rrvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__me
ta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-2rrvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e6e1bace-f2ff-419b-9206-323d49ce67ec\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-ndzt5\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__met
a_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-ndzt5\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5497497a-dd9f-464c-a031-1af7c8a3123c\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-t448w\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-t448w\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"568d2b5d-b1f3-4810-8ef5-058a27e6266a\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes
_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-sign
er@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-web
hook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_develope
r\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_se
rvice_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-w
ebhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_
single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availabilit
y\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernet
es_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__met
a_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_lab
el_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_open
shift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machine-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_addres
s_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@16655048
48\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpre
sent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpres
ent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_s
ervice_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_ta
rget_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\"
,\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.42:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_end
points_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-operator-74b9f87587\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.42\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"74b9f87587\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90b05b44-49bd-4179-af1a-b1ffb84bf9e4\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\"
,\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_in
clude_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_a
pp\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.42:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-operator-74b9f87587\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.42\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"74b9f87587\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90b05b44-49bd-4179-af1a-b1ffb84bf9e4\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-s
igner@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\"
:\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_serv
ice_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__
meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__
meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\
"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_
annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-opera
tor-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_s
ingle_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",
\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_po
d_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"t
rue\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__m
eta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8
s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpres
ent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_se
rvice_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_s
ervice_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook
\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_an
notation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webho
ok\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_sing
le_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"tr
ue\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_e
ndpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_ku
bernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machine-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_
app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_sing
le_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8
s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpres
ent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_se
rvice_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_s
ervice_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_in
clude_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_a
pp\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.42:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-operator-74b9f87587\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.42\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"74b9f87587\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90b05b44-49bd-4179-af1a-b1ffb84bf9e4\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-s
igner@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\"
:\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_serv
ice_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__
meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__
meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\
"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_
annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-opera
tor-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_s
ingle_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",
\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_po
d_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"t
rue\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__m
eta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machine-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpr
esent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annota
tionpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kuber
netes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_s
ervice_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_in
clude_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_a
pp\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.42:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.42\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-operator-74b9f87587\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.42\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"74b9f87587\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90b05b44-49bd-4179-af1a-b1ffb84bf9e4\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-s
igner@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\"
:\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_serv
ice_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-au
toscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.45.39\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernete
s_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__m
eta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_inclu
de_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_k
ubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"tr
ue\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kuberne
tes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webho
ok\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_develo
per\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_a
nnotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true
\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_ku
bernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machine-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_targe
t_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"
__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8
s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_ser
vice_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotat
ion_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_na
me\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__me
ta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.44.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.100:50051\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"community-operators-6xhq7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_olm_service_spec_hash\":\"79986496d9\",\"__meta_kubernetes_endpoints_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_endpoints_name\":\"community-operators\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.100\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:95:36:6d\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.100\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:95:36:6d\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operatorframework_io_managed_by\":\"marketplace-operator\",\"__meta_kubernetes_pod_annotation_operatorframework_io_priorityclass\":\"system-cluster-critical\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_managed_by\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_priorityclass\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry-server\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"50051\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.79.100\",\"__meta_kubernetes_pod_label_olm_catalogSource\":\"community-operators\",\"__meta_kubernetes_pod_label_olm_pod_spec_hash\":\"584cc5d5c6\",\"__meta_kubernetes_pod_labelpresent_catalogsource_operators_coreos_com_update\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_catalogSource\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_pod_spec_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"community-operators-6xhq7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1d5463c2-ae3f-4ae2-b8c2-461fcf8304f6\",\"__meta_kubernetes_service_label_olm_service_spec_hash\":\"79986496d9\",\"__meta_kubernetes_service_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_service_name\":\"community-operators\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.113:50051\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"redhat-operators-7vq7x\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_olm_service_spec_hash\":\"f6ff9c676\",\"__meta_kubernetes_endpoints_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_endpoints_name\":\"redhat-operators\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__met
a_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.113\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:46:75:5f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.113\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:46:75:5f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operatorframework_io_managed_by\":\"marketplace-operator\",\"__meta_kubernetes_pod_annotation_operatorframework_io_priorityclass\":\"system-cluster-critical\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_managed_by\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_priorityclass\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry-server\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"50051\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.79.113\",\"__meta_kubernetes_pod_label_olm_catalogSource\":\"redhat-operators\",\"__meta_kubernetes_pod_label_olm_pod_spec_hash\":\"7745cfd586\",\"__meta_kubernetes_pod_labelpresent_catalogsource_operators_coreos_com_update\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_catalogSource\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_pod_spec_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"redhat-operators-7vq7x\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"489c18ef-1d31-4d13-8856-0137e3d5ee19\",\"__meta_kubernetes_service_label_olm_service_spec_hash\":\"f6ff9c676\",\"__meta_kubernetes_service_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_service_name\":\"redhat-operators\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.88:50051\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"certified-operators-g5v7x\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_olm_service_spec_hash\":\"676574974f\",\"__meta_kubernetes_endpoints_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_endpoints_name\":\"certified-operators\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.88\\\"\\n    ],\\n    \\\"mac\\\": 
\\\"fa:16:3e:cc:69:e1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.88\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:cc:69:e1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operatorframework_io_managed_by\":\"marketplace-operator\",\"__meta_kubernetes_pod_annotation_operatorframework_io_priorityclass\":\"system-cluster-critical\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_managed_by\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_priorityclass\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry-server\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"50051\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.79.88\",\"__meta_kubernetes_pod_label_olm_catalogSource\":\"certified-operators\",\"__meta_kubernetes_pod_label_olm_pod_spec_hash\":\"78dcddd844\",\"__meta_kubernetes_pod_labelpresent_catalogsource_operators_coreos_com_update\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_catalogSource\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_pod_spec_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"certified-operators-g5v7x\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fcb29ab8-aa9d-4fd8-b085-ce0098072c59\",\"__meta_kubernetes_service_label_olm_service_spec_hash\":\"676574974f\",\"__meta_kubernetes_service_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_service_name\":\"certified-operators\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.141:8383\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"marketplace-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    
\\\"ips\\\": [\\n        \\\"10.128.79.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"marketplace-operator-79fb778f6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.79.141\",\"__meta_kubernetes_pod_label_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79fb778f6b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b3bba0b4-92e7-461f-abff-61fc1b5cd349\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"marketplace-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"marketplace-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.141:60000\",\"__meta_kubernetes_endpoints_label_name\":\"marketplace-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"marketplace-operator-79fb778f6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.79.141\",\"__meta_kubernetes_pod_label_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79fb778f6b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b3bba0b4-92e7-461f-abff-61fc1b5cd349\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"marketplace-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"marketplace-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.141:8080\",\"__meta_kubernetes_endpoints_label_name\":\"marketplace-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints
_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.79.141\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"marketplace-operator-79fb778f6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.79.141\",\"__meta_kubernetes_pod_label_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79fb778f6b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b3bba0b4-92e7-461f-abff-61fc1b5cd349\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"marketplace-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubern
etes_service_name\":\"marketplace-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.78.179:50051\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"redhat-marketplace-hhmpc\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_olm_service_spec_hash\":\"fc99d9bdb\",\"__meta_kubernetes_endpoints_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_endpoints_name\":\"redhat-marketplace\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.78.179\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:27:89:d2\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.78.179\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:27:89:d2\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operatorframework_io_managed_by\":\"marketplace-operator\",\"__meta_kubernetes_pod_annotation_operatorframework_io_priorityclass\":\"system-cluster-critical\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_managed_by\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_priorityclass\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry-server\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"50051\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.78.179\",\"__meta_kubernetes_pod_label_olm_catalogSource\":\"redhat-marketplace\",\"__meta_kubernetes_pod_label_olm_pod_spec_hash\":\"fbf4dd465\",\"__meta_kubernetes_pod_labelpresent_olm_catalogSource\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_pod_spec_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"redhat-marketplace-hhmpc\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"3f0ff733-7469-4eb7-9a01-55c45eca0afe\",\"__meta_kubernetes_service_label_olm_service_spec_hash\":\"fc99d9bdb\",\"__meta_kubernetes_service_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_service_name\":\"redhat-marketplace\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoin
t_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_k
ubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6
l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernete
s_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_a
pp_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__sch
eme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"t
rue\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kuber
netes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kuberne
tes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelp
resent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_se
rvice_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_n
ame\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_
serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_end
points_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_servin
g_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostes
t-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Runn
ing\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes
_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_sec
ret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__m
eta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash
\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpo
int_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__met
a_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":
\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__me
ta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_ap
p_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernet
es_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-
0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_
labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp
\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_
io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_a
pp_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of
\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    
\\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"o
penshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kub
ernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme
__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io
_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_
alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\
"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_a
lertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"
__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_a
lertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"
__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    
\\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\
"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\
"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"t
rue\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredL
abels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabe
ls\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_
cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"_
_meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"open
shift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_i
o_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_
labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpo
ints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n
5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    
\\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernete
s_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}}
,{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\
"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\"
:\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_
label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_rea
dy\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_l
abel_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_read
y\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_l
abel_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_read
y\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": 
\\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-ope
rator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name
\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_
pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_k
ubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"t
rue\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_
kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\
"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes
_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worke
r-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\
"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_
io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-
0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"tr
ue\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta
_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-export
er\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_prot
ocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_pa
rt_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\"
:\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"_
_meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io
_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernet
es_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_k
ubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta
_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_servic
e_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_
endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_bet
a_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta
_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_open
shift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_ph
ase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": 
true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-m
onitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_no
de_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving
_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part
_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",
\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:2379\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"etcd\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:28:22.756939605Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"742f6dc2-47a0-41cc-b0a9-13e66d83f057\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:2379\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"etcd\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:56.640481859Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6891d70c-a3ec-4d90-b283-d4abf49382d3\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"opensh
ift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:2379\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"etcd\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:36.245067150Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"49572518-4248-4dc2-8392-e8298ad9706c\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9980\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:28:22.756939605Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"etcd-readyz\",\"__meta_kubernetes_pod_container_port_name\":\"readyz\",\"__meta_kubernetes_pod_container_port_number\":\"9980\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"742f6dc2-47a0-41cc-b0a9-13e66d83f057\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9980\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:56.640481859Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_ta
rget_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"etcd-readyz\",\"__meta_kubernetes_pod_container_port_name\":\"readyz\",\"__meta_kubernetes_pod_container_port_number\":\"9980\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6891d70c-a3ec-4d90-b283-d4abf49382d3\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9980\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:36.245067150Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_con
fig_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"etcd-readyz\",\"__meta_kubernetes_pod_container_port_name\":\"readyz\",\"__meta_kubernetes_pod_container_port_number\":\"9980\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"49572518-4248-4dc2-8392-e8298ad9706c\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelprese
nt_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes
_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint
_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_
annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\"
:\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_opensh
ift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostes
t-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_
secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoi
nts_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endp
oints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"_
_meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanag
er\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_k
ubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alert
manager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__me
ta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertm
anager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__met
a_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertm
anager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__met
a_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    
\\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__ad
dress__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__me
ta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__
meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\
":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\
":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labe
lpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-se
rving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\
"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_
kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoi
nts_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_comp
onent\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kuberne
tes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5
\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-mo
nitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_
app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":
{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_s
ervice_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_ope
nshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n
5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":
\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_n
ode_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kuber
netes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kub
ernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"tru
e\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta
_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpre
sent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true
\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_k
ubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresen
t_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelp
resent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"tr
ue\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        
\\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__
meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"disco
veredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discover
edLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_k
ubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpre
sent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\"
,\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ope
rated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tr
ue\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tru
e\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tru
e\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n 
   \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-
94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_m
anaged_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"t
rue\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_
kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_na
me\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kuber
netes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes
_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"_
_meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernete
s_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\
"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headl
ess\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp
\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_k
ubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headle
ss\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\"
,\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kube
rnetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresen
t_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotat
ionpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_r
eady\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kub
ernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kub
ernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"opensh
ift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_k
ubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_ta
rget_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":
\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernet
es_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotat
ion_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshi
ft_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernet
es_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_
serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n
5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kuberne
tes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app
_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes
_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": 
\\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-se
rving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\"
:\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_sec
ret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label
_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_
openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubern
etes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_lab
el_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubern
etes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\
",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_ver
sion\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_a
ddress_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\
",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_ku
bernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n
5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running
\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpre
sent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes
_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind
\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernet
es_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernet
es_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_
beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_nam
e\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_se
rving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubern
etes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_i
o_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\"
,\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ope
rated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tr
ue\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tru
e\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tru
e\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n 
   \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-
94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_m
anaged_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes
_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"_
_meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernete
s_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\
"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headl
ess\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp
\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_k
ubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headle
ss\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\"
,\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kube
rnetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\
",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta
_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discover
edLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discov
eredLabels\":{\"__address__\":\"10.196.0.105:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubele
t\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubern
etes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\"
,\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labe
lpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__me
ta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_l
abel_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_servic
e_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/
openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"
__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_k
ubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\
"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"
true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_k
ubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__me
ta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kub
ernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubel
et\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"promethe
us-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kube
rnetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:4194\",\"__meta_kubernetes_en
dpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_
by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_ku
bernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__schem
e__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_
io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",
\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_
alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\
"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_
alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\
"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    
\\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{
\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",
\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\
"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredL
abels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredL
abels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\"
,\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kuber
netes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"os
test-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_n
ame\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresen
t_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",
\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    
],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_ann
otation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLab
els\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernete
s_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__ad
dress__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_en
dpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_se
rvice_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_
name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_
openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\
"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving
_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_
endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_ser
ving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endp
oint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta
_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__me
ta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"
__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_
pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":
\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__
meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod
_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ope
rated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"t
rue\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_labe
l_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\"
:\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":
\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":
\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": 
\\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metric
s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\
"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labe
lpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_n
ame\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cer
t_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headle
ss\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter
\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"d
iscoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"
discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"dis
coveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\
":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__
meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": 
[\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__met
a_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\
":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshif
t_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":
\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\"
,\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kuber
netes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_serv
ice_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_
service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kub
ernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kub
ernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endp
oint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_ser
vice_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_nod
e_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_
io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_k
ubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_opensh
ift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kuber
netes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\
",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kube
rnetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"tru
e\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_
headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-
j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",
\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_h
eadless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8k
q82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"_
_meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_l
abelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\"
:\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        
\\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__
meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_n
ame\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cer
t_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__met
a_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_cont
roller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_labe
l_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\"
:\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":
\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":
\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": 
\\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metric
s/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\
"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labe
lpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kub
ernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kub
ernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endp
oint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_ser
vice_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_nod
e_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_
io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_k
ubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_opensh
ift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"d
iscoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"d
iscoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_la
belpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kuber
netes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"tru
e\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_
headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4p
kp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernet
es_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_servi
ce_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5
rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"
__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-m
aster-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_
kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_po
d_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_compone
nt\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpres
ent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshif
t-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_b
eta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_po
d_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labe
lpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-wor
ker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    
\\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kuberne
tes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"
discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"dis
coveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\
":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__
meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"t
rue\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_
kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_
io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance
\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubern
etes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernet
es_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotat
ion_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshi
ft_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernet
es_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_
serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheu
s-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controlle
r_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes
_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\
"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headl
ess\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",
\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_lab
elpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_ku
bernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-w
orker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kub
ernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"o
stest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kuber
netes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discover
edLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kube
rnetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredL
abels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\"
,\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ope
rated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tr
ue\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tru
e\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tru
e\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n 
   \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-
94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_m
anaged_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_versi
on\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container
_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"opensh
ift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_lab
elpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__m
eta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_lab
elpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annot
ationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_
ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_c
omponent\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent
_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_rev
ision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": 
\\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_ser
ving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discov
eredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discov
eredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoin
ts_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\
",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0
-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_ku
bernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labe
lpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometh
eus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_open
shift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discovere
dLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernet
es_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"_
_address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_e
ndpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_s
ervice_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node
_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta
_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":
\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_servin
g_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes
_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_se
rving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\
",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kube
rnetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"t
rue\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_
cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_ale
rtmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"_
_meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operate
d_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\"
,\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",
\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",
\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    
\\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":
{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\"
,\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\"
:\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"o
stest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_
name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__met
a_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_en
dpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__m
eta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\
":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\
"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernete
s_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"_
_meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_p
od_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operate
d_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\
",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_op
erated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"t
rue\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ope
rated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tr
ue\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ope
rated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"tr
ue\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": 
\\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\
"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ost
est-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent
_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_ap
p_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_ku
bernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernete
s_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta
_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"n
ode-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"
__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"_
_meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"
__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_label
present_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kuber
netes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod
_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    
\\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_ope
nshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"tru
e\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_co
mponent\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kuberne
tes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annota
tion_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint
_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_opensh
ift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kuberne
tes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io
_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_no
de_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"
Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    
\\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0
\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\
":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_se
cret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/op
enshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_
managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernet
es_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",
\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_head
less\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pk
p\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta
_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headl
ess\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\
",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_ku
bernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"disc
overedLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"
true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta
_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true
\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__met
a_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"tru
e\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_ku
bernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_e
ndpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__m
eta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_
endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"_
_meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headles
s\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\"
,\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kuber
netes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless
\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\
"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernet
es_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpr
esent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-ma
in-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotation
present_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\
"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secre
t_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kub
ernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_la
belpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes
_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotatio
n_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_no
de_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift
_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes
_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_se
rving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_
name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Run
ning\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    
\\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}
},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_k
ubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_sig
ned_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod
_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_p
od_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kuberne
tes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__m
eta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by
\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1
.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"tr
ue\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes
_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    
\\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_s
ervice_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_a
lertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\
"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_opera
ted_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true
\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operat
ed_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\
",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operat
ed_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\
",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n   
 \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels
\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fx
s\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed
_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\
",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kube
rnetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelp
resent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_control
ler_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kub
ernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"
__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discovere
dLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\
"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_ale
rtmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"_
_meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",
\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",
\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",
\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__m
eta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n   
 \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\"
:{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"o
stest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_
name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_tem
plate_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io
_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_po
rt_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernete
s_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e52
0f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_po
d_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kub
ernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"_
_meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_op
enshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_e
ndpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_
service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kube
rnetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf
-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_
pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kuberne
tes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_lab
elpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    
\\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504
848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__met
a_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_en
dpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__m
eta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\
":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\
"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernete
s_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"_
_meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_p
od_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    
\\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    
\\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernet
es_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"
openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_e
ndpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_s
ervice_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node
_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta
_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":
\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_servin
g_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.114\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes
_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.183\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_se
rving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_re
vision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\
"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_v
ersion\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_servi
ce_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_sig
ned_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpr
esent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"
kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"ope
nshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.89\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/
0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"tru
e\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.239\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"di
scoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.49\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"
__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kub
ernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.177\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\
"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_ale
rtmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"_
_meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",
\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",
\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",
\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.112\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__m
eta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.138\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n   
 \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.161\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\"
:{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.77\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"o
stest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.82\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_
name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__met
a_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_en
dpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__m
eta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\
":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\
"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernete
s_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"_
_meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_p
od_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.18\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.23.35\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\
",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.22.230\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kube
rnetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.135:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-mmmtp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.135\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:0f:7c:01\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.135\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:0f:7c:01\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.135\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-mmmtp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"3e837b28-47f3-449c-a549-2f35716eadac\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_se
rvice_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.247:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-rwwwz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.247\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ad:57:02\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.247\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:ad:57:02\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.34.247\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-rwwwz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5cc84773-7d05-45e6-9e0e-c1d785d19d6f\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.62:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-98jr8\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\
":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.62\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:4d:80:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.62\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:4d:80:fb\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.62\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-98jr8\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b9e25138-56b7-4086-b0d8-bbfad8d59d29\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admissio
n-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.92:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-xh8kk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.92\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:94:47\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.92\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:d9:94:47\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.92\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-xh8kk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"78e54083-207a-4a1d-9ac3-1e61e4c3a94d\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_s
ervice_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.35.157:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-9vnl8\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.35.157\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:80:04:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.35.157\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:80:04:9f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.35.157\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-9vnl8\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"eab7a941-acc9-4f7a-9e27-bfda6efdc8b7\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.35.46:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-6p764\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoint
s_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.35.46\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:21:c6:58\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.35.46\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:21:c6:58\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.35.46\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-6p764\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f1a5dd1f-c96d-435e-a2c2-414ef30007b0\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-mu
ltus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_servic
e_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504
848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": 
true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_
k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:8443\",\"__meta_kubernetes
_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kube
rnetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod
_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:6443\",\"__meta_kubernetes_endpoint_address_target_ki
nd\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-
controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubern
etes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.19\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_n
etwork_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.34.59\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.102.146:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\
",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-59lq9\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.102.146\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2e:58:58\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.102.146\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:2e:58:58\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.102.146\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-59lq9\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"d1a98bea-e210-44c4-a570-c9b3e3b0c15b\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.102.87:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-b6qcb\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.102.87\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b8:a1:d9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.102.87\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:b8:a1:d9\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.102.87\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-b6qcb\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a0d01f62-8fc4-461d-9bb0-508100b31c66\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.103.135:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-8pbt4\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.135\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:af:35:5d\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.135\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:af:35:5d\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.103.135\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-8pbt4\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"dfe74f2a-da84-4b8a-b5ae-85624567baca\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.103.154:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-x7ncv\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:01:ae:14\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.154\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:01:ae:14\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.103.154\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-x7ncv\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2b7a96a9-c1a8-4940-adaa-942043648bad\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.103.215:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-k2dkh\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.215\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fc:d5:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.215\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:fc:d5:0a\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.103.215\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-k2dkh\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"9be059a1-72fb-40df-a638-65738e955f58\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.103.253:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-675xj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.253\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:3e:12:ac\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.103.253\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:3e:12:ac\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.103.253\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-675xj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8936e92f-a7cf-4889-95a8-6c5a667d658b\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.118.209:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-794b9fc494-m9zm9\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.118.209\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:e2:d2\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.118.209\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:5b:e2:d2\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-794b9fc494\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.118.209\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_label_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"794b9fc494\",\"__meta_kubernetes_pod_label_revision\":\"1\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-794b9fc494-m9zm9\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"3aa58f46-924e-4cd2-9aea-09be52dd9703\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-oauth-apiserver/openshift-oauth-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.119.144:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-794b9fc494-bwqm7\",\"__meta_kubernetes_endpoint_node_nam
e\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.119.144\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:8d:f7:ff\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.119.144\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:8d:f7:ff\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-794b9fc494\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.119.144\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_label_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"794b9fc494\",\"__meta_kubernetes_pod_label_revision\":\"1\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-794b9fc494-bwqm7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"eee0ef71-5e00-42cc-9f3f-5751f435891d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@16655
04848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-oauth-apiserver/openshift-oauth-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.119.66:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-794b9fc494-mh5mh\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.119.66\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a5:82:c1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.119.66\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:a5:82:c1\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-794b9fc494\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.119.66\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_label_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"794b9fc494\",\"__meta_kubernetes_pod_label_revision\":\"1\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-794b9fc494-mh5mh\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ae380f69-28f7-4135-a239-268c9862de08\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-oauth-apiserver/openshift-oauth-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.45:5443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"packageserver-5fb6859686-2g8hx\",\"__meta_kubernetes_endpoint_node_na
me\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"5443\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"packageserver-service\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:7a:43:e0\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:7a:43:e0\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_olm_operatorGroup\":\"olm-operators\",\"__meta_kubernetes_pod_annotation_olm_operatorNamespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olm_targetNamespaces\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olmcahash\":\"22e857e11f8fc8545f7b19e7b40f09deb38dbd5b268e26b89e90246b791afe7b\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorGroup\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorNamespace\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_targetNamespaces\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olmcahash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"packageserver\",\"__meta_kubernetes_pod_container_port_number\":\"5443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"packageserver-5fb6859686\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.93.45\",\"__meta_kubernetes_pod_label_app\":\"packageserver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5fb6859686\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"packageserver-5fb6859686-2g8hx\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f503b711-ed84-447c-ae2d-d9f748184e79\",\"__meta_kubernetes_service_name\":\"packageserver-service\",\"__metrics_path__\":\"/metrics\"
,\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.91:5443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"packageserver-5fb6859686-lcrw6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"5443\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"packageserver-service\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.91\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:75:24:01\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.91\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:75:24:01\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_olm_operatorGroup\":\"olm-operators\",\"__meta_kubernetes_pod_annotation_olm_operatorNamespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olm_targetNamespaces\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olmcahash\":\"22e857e11f8fc8545f7b19e7b40f09deb38dbd5b268e26b89e90246b791afe7b\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorGroup\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorNamespace\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_targetNamespaces\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olmcahash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"packageserver\",\"__meta_kubernetes_pod_container_port_number\":\"5443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"packageserver-5fb6859686\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.93.91\",\"__meta_kubernetes_pod_label_app\":\"packageserver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5fb6859686\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_n
ame\":\"packageserver-5fb6859686-lcrw6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"d594741c-595c-4b03-861d-b7f1ea727aeb\",\"__meta_kubernetes_service_name\":\"packageserver-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.92.123:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"olm-operator-56f75d4687-pdzb6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"olm-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"olm-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.92.123\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:08:05:71\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.92.123\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:08:05:71\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"olm-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"olm-operator-56f75d4687\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.92.123\",\"__meta_kubernetes_pod_label_app\":\"olm-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"56f75d4687\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"olm-operator-56f75d4687-pdzb6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90bf0bdc-6d48-4eb2-bc10-49acdc5bc676\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"olm-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_opensh
ift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"olm-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"olm-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.117:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"catalog-operator-7c7d96d8d6-bfvts\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"catalog-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"catalog-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.117\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:29:8b:73\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.117\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:29:8b:73\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"catalog-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"catalog-operator-7c7d96d8d6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.93.117\",\"__meta_kubernetes_pod_label_app\":\"catalog-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c7d96d8d6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"catalog-operator-7c7d96d8d6-bfvts\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"245bde86-6823-4aaf-9b27-aaad0428d6f6\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"catalog-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"catalog-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"catalog-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.45:5443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"packageserver-5fb6859686-2g8hx\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"5443\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"packageserver-service\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"
__meta_kubernetes_pod_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:7a:43:e0\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.45\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:7a:43:e0\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_olm_operatorGroup\":\"olm-operators\",\"__meta_kubernetes_pod_annotation_olm_operatorNamespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olm_targetNamespaces\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olmcahash\":\"22e857e11f8fc8545f7b19e7b40f09deb38dbd5b268e26b89e90246b791afe7b\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorGroup\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorNamespace\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_targetNamespaces\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olmcahash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"packageserver\",\"__meta_kubernetes_pod_container_port_number\":\"5443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"packageserver-5fb6859686\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.93.45\",\"__meta_kubernetes_pod_label_app\":\"packageserver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5fb6859686\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"packageserver-5fb6859686-2g8hx\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f503b711-ed84-447c-ae2d-d9f748184e79\",\"__meta_kubernetes_service_name\":\"packageserver-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.91:5443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"packageserver-5fb6859686-lcrw6\",\"__m
eta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"5443\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"packageserver-service\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.91\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:75:24:01\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n    \\\"name\\\": \\\"kuryr\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.128.93.91\\\"\\n    ],\\n    \\\"mac\\\": \\\"fa:16:3e:75:24:01\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_olm_operatorGroup\":\"olm-operators\",\"__meta_kubernetes_pod_annotation_olm_operatorNamespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olm_targetNamespaces\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olmcahash\":\"22e857e11f8fc8545f7b19e7b40f09deb38dbd5b268e26b89e90246b791afe7b\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorGroup\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorNamespace\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_targetNamespaces\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olmcahash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"packageserver\",\"__meta_kubernetes_pod_container_port_number\":\"5443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"packageserver-5fb6859686\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.93.91\",\"__meta_kubernetes_pod_label_app\":\"packageserver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5fb6859686\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"packageserver-5fb6859686-lcrw6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"d594741c-595c-4b03-861d-b7f1ea727aeb\",\"__meta_kubernetes_service_name\":\"packageserver-service\",\"
__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\"}}]}}"
STEP: verifying all expected jobs have a working target
STEP: verifying standard metrics keys
STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1
Oct 13 10:19:38.425: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:19:38.894: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:19:38.894: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1
Oct 13 10:19:38.894: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:19:39.294: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:19:39.298: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
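From this point the test polls the same two PromQL expressions, template_router_reload_seconds_count{job="router-internal-default"} >= 1 and haproxy_server_up{job="router-internal-default"} >= 1, roughly every ten seconds by exec'ing curl inside the helper pod against the Thanos querier; an empty "result" vector in the stdout means the expression matched no series at that instant, so the test keeps retrying. A minimal manual equivalent of one such query is sketched below; $TOKEN is a placeholder for a bearer token authorized to query the monitoring stack, while the endpoint and query are the ones exercised by the test:

  # Placeholder token; issues the same instant query the test runs via kubectl exec.
  QUERY='haproxy_server_up{job="router-internal-default"} >= 1'
  curl -G -sk -H "Authorization: Bearer $TOKEN" \
    --data-urlencode "query=${QUERY}" \
    "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query"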
STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1
Oct 13 10:19:49.303: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:19:49.740: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:19:49.740: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1
Oct 13 10:19:49.740: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:19:50.460: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:19:50.461: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1
Oct 13 10:20:00.462: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:20:00.911: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:20:00.911: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1
Oct 13 10:20:00.911: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:20:01.315: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:20:01.316: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1
Oct 13 10:20:11.317: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:20:11.798: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:20:11.798: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1
Oct 13 10:20:11.799: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:20:12.190: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:20:12.190: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1
Oct 13 10:20:22.191: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:20:22.593: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:20:22.593: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1
Oct 13 10:20:22.593: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"'
Oct 13 10:20:23.044: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n"
Oct 13 10:20:23.044: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-prometheus-z4ls2".
STEP: Found 6 events.
Oct 13 10:20:33.105: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-z4ls2/execpod to ostest-n5rnf-worker-0-j4pkp
Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_execpod_e2e-test-prometheus-z4ls2_06053816-a003-4d3a-a95d-f28fa95a0364_0(48fa4e40a85f79a1c4f39408ae14f4dedd374b491f0b4d0ec1c9f7d14cd6b18f): error adding pod e2e-test-prometheus-z4ls2_execpod to CNI network "multus-cni-network": [e2e-test-prometheus-z4ls2/execpod/06053816-a003-4d3a-a95d-f28fa95a0364:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. Is kuryr-daemon running?; Post "http://localhost:5036/addNetwork": EOF
Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:36 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.152.19/23] from kuryr
Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:36 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine
Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:36 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:36 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 10:20:33.111: INFO: POD      NODE                         PHASE    GRACE  CONDITIONS
Oct 13 10:20:33.111: INFO: execpod  ostest-n5rnf-worker-0-j4pkp  Running  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:43 +0000 UTC  }]
Oct 13 10:20:33.111: INFO: 
Oct 13 10:20:33.122: INFO: skipping dumping cluster info - cluster too large
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-prometheus-z4ls2" for this suite.
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:571]: Unexpected error:
    <errors.aggregate | len:2, cap:2>: [
        {
            s: "promQL query returned unexpected results:\ntemplate_router_reload_seconds_count{job=\"router-internal-default\"} >= 1\n[]",
        },
        {
            s: "promQL query returned unexpected results:\nhaproxy_server_up{job=\"router-internal-default\"} >= 1\n[]",
        },
    ]
    [promQL query returned unexpected results:
    template_router_reload_seconds_count{job="router-internal-default"} >= 1
    [], promQL query returned unexpected results:
    haproxy_server_up{job="router-internal-default"} >= 1
    []]
occurred

Stderr
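
Both failures above reduce to PromQL queries that came back from the Thanos Querier with an empty vector ("result":[]). A minimal reproduction sketch, assuming an oc session whose token is authorized to query cluster monitoring data and any pod (placeholders <namespace>/<pod>) that can reach the in-cluster service, re-issues the same query the test ran:

# Illustrative sketch, not part of the test output; <namespace> and <pod> are placeholders.
TOKEN="$(oc whoami -t)"
QUERY='haproxy_server_up{job="router-internal-default"} >= 1'
kubectl exec -n <namespace> <pod> -- \
  curl -sk -G \
       -H "Authorization: Bearer ${TOKEN}" \
       --data-urlencode "query=${QUERY}" \
       "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query"
# A healthy default router returns one sample per backend server; the failure above
# corresponds to an empty "result":[] array in the JSON body.
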
_sig-cli__oc_explain_should_contain_spec+status_for_builtinTypes__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 15.8s

_sig-cli__oc_adm_must-gather_runs_successfully_for_audit_logs__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 204.0s

_sig-network__network_isolation_when_using_OpenshiftSDN_in_a_mode_that_does_not_isolate_namespaces_by_default_should_allow_communication_between_pods_in_different_namespaces_on_different_nodes__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 71.0s

_sig-network__network_isolation_when_using_OpenshiftSDN_in_a_mode_that_isolates_namespaces_by_default_should_allow_communication_from_default_to_non-default_namespace_on_the_same_node__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.4s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:17:48.959: INFO: configPath is now "/tmp/configfile3520642642"
Oct 13 10:17:48.959: INFO: The user is now "e2e-test-ns-global-lwdqb-user"
Oct 13 10:17:48.959: INFO: Creating project "e2e-test-ns-global-lwdqb"
Oct 13 10:17:49.297: INFO: Waiting on permissions in project "e2e-test-ns-global-lwdqb" ...
Oct 13 10:17:49.305: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:17:49.500: INFO: Waiting for service account "default" secrets (default-token-wftgl) to include dockercfg/token ...
Oct 13 10:17:49.615: INFO: Waiting for service account "default" secrets (default-token-wftgl) to include dockercfg/token ...
Oct 13 10:17:49.712: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:17:49.842: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:17:49.965: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:17:49.977: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:17:50.081: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:17:50.908: INFO: Project "e2e-test-ns-global-lwdqb" has been fully provisioned.
[BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  github.com/openshift/origin/test/extended/networking/util.go:350
Oct 13 10:17:51.263: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used
Oct 13 10:17:51.263: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:17:51.334: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-ns-global-lwdqb-user}, err: <nil>
Oct 13 10:17:51.471: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-ns-global-lwdqb}, err: <nil>
Oct 13 10:17:51.630: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~LgbhHjQEB7Cs_1IOwk15Sbfp2IoL2tYEqYg_3FmGvqg}, err: <nil>
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-ns-global-lwdqb" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.

Stderr
_sig-builds__Feature_Builds__s2i_build_with_a_root_user_image_should_create_a_root_build_and_fail_without_a_privileged_SCC__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.4s

Skipped: skip [github.com/openshift/origin/test/extended/builds/s2i_root.go:36]: TODO: figure out why we aren't properly denying this, also consider whether we still need to deny it
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-builds][Feature:Builds] s2i build with a root user image
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-builds][Feature:Builds] s2i build with a root user image
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:17:46.496: INFO: configPath is now "/tmp/configfile3534416200"
Oct 13 10:17:46.496: INFO: The user is now "e2e-test-s2i-build-root-vpj8g-user"
Oct 13 10:17:46.496: INFO: Creating project "e2e-test-s2i-build-root-vpj8g"
Oct 13 10:17:46.705: INFO: Waiting on permissions in project "e2e-test-s2i-build-root-vpj8g" ...
Oct 13 10:17:46.711: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:17:46.824: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:17:46.933: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:17:47.087: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:17:47.097: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:17:47.109: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:17:47.870: INFO: Project "e2e-test-s2i-build-root-vpj8g" has been fully provisioned.
[It] should create a root build and fail without a privileged SCC [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/builds/s2i_root.go:35
[AfterEach] [sig-builds][Feature:Builds] s2i build with a root user image
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:17:48.040: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-s2i-build-root-vpj8g-user}, err: <nil>
Oct 13 10:17:48.129: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-s2i-build-root-vpj8g}, err: <nil>
Oct 13 10:17:48.222: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~DpNKgoRTS0zpC9G7-gcQ2o2Y9QXBbkpkuz-y0VJTvHQ}, err: <nil>
[AfterEach] [sig-builds][Feature:Builds] s2i build with a root user image
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-s2i-build-root-vpj8g" for this suite.
skip [github.com/openshift/origin/test/extended/builds/s2i_root.go:36]: TODO: figure out why we aren't properly denying this, also consider whether we still need to deny it

Stderr
_sig-api-machinery__Feature_ClusterResourceQuota__Cluster_resource_quota_should_control_resource_limits_across_namespaces__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 16.0s

_sig-instrumentation__sig-builds__Feature_Builds__Prometheus_when_installed_on_the_cluster_should_start_and_expose_a_secured_proxy_and_verify_build_metrics__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 174.0s

Failed:
fail [github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:83]: Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "promQL query returned unexpected results:\nopenshift_build_total{phase=\"Complete\"} >= 0\n[]",
        },
    ]
    promQL query returned unexpected results:
    openshift_build_total{phase="Complete"} >= 0
    []
occurred
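
The Stdout below shows the build completing (BuildCompleted at 10:19:19) while every poll of openshift_build_total{phase="Complete"} >= 0 returned an empty vector, i.e. no series for that metric existed at query time. A minimal follow-up sketch, assuming the same placeholder exec pod and an authorized token, checks for down scrape targets over the same query path, since an absent metric often points at its exporter not being scraped:

# Illustrative sketch, not part of the test output; <namespace> and <pod> are placeholders.
TOKEN="$(oc whoami -t)"
kubectl exec -n <namespace> <pod> -- \
  curl -sk -G \
       -H "Authorization: Bearer ${TOKEN}" \
       --data-urlencode 'query=up == 0' \
       "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query"
# Any series returned here names a scrape target Prometheus currently considers down.
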

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus
  github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:27
[It] should start and expose a secured proxy and verify build metrics [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:43
Oct 13 10:17:32.453: INFO: configPath is now "/tmp/configfile2484115495"
Oct 13 10:17:32.453: INFO: The user is now "e2e-test-prometheus-dcjzj-user"
Oct 13 10:17:32.454: INFO: Creating project "e2e-test-prometheus-dcjzj"
Oct 13 10:17:32.656: INFO: Waiting on permissions in project "e2e-test-prometheus-dcjzj" ...
Oct 13 10:17:32.665: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:17:32.773: INFO: Waiting for service account "default" secrets () to include dockercfg/token ...
Oct 13 10:17:33.028: INFO: Waiting for service account "default" secrets () to include dockercfg/token ...
Oct 13 10:17:33.106: INFO: Waiting for service account "default" secrets (default-token-w4mf7) to include dockercfg/token ...
Oct 13 10:17:33.174: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:17:33.284: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:17:33.392: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:17:33.412: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:17:33.422: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:17:34.008: INFO: Project "e2e-test-prometheus-dcjzj" has been fully provisioned.
Oct 13 10:17:34.013: INFO: Creating new exec pod
STEP: verifying the oauth-proxy reports a 403 on the root URL
Oct 13 10:18:26.767: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl -k -s -o /dev/null -w '%{http_code}' "https://thanos-querier.openshift-monitoring.svc:9091"'
Oct 13 10:18:27.266: INFO: stderr: "+ curl -k -s -o /dev/null -w '%{http_code}' https://thanos-querier.openshift-monitoring.svc:9091\n"
Oct 13 10:18:27.266: INFO: stdout: "403"
STEP: verifying a service account token is able to authenticate
Oct 13 10:18:27.266: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl -k -s -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' -o /dev/null -w '%{http_code}' "https://thanos-querier.openshift-monitoring.svc:9091/graph"'
Oct 13 10:18:27.672: INFO: stderr: "+ curl -k -s -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' -o /dev/null -w '%{http_code}' https://thanos-querier.openshift-monitoring.svc:9091/graph\n"
Oct 13 10:18:27.672: INFO: stdout: "200"
STEP: calling oc create -f /tmp/fixture-testdata-dir1019657252/test/extended/testdata/builds/build-pruning/successful-build-config.yaml 
Oct 13 10:18:27.672: INFO: Running 'oc --kubeconfig=/tmp/configfile2484115495 create -f /tmp/fixture-testdata-dir1019657252/test/extended/testdata/builds/build-pruning/successful-build-config.yaml'
W1013 10:18:27.752672   81811 shim_kubectl.go:55] Using non-groupfied API resources is deprecated and will be removed in a future release, update apiVersion to "build.openshift.io/v1" for your resource
buildconfig.build.openshift.io/myphp created
STEP: start build
Oct 13 10:18:27.840: INFO: Running 'oc --kubeconfig=/tmp/configfile2484115495 start-build myphp -o=name'
Oct 13 10:18:28.052: INFO: 

start-build output with args [myphp -o=name]:
Error><nil>
StdOut>
build.build.openshift.io/myphp-1
StdErr>



STEP: verifying build completed successfully
Oct 13 10:18:28.054: INFO: Waiting for myphp-1 to complete

Oct 13 10:19:24.099: INFO: Done waiting for myphp-1: util.BuildResult{BuildPath:"build.build.openshift.io/myphp-1", BuildName:"myphp-1", StartBuildStdErr:"", StartBuildStdOut:"build.build.openshift.io/myphp-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*v1.Build)(0xc001f76380), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc001af5440)}
 with error: <nil>

STEP: verifying a service account token is able to query terminal build metrics from the Prometheus API
STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0
Oct 13 10:19:24.100: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"'
Oct 13 10:19:24.499: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n"
Oct 13 10:19:24.499: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0
Oct 13 10:19:34.500: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"'
Oct 13 10:19:34.873: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n"
Oct 13 10:19:34.873: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0
Oct 13 10:19:44.874: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"'
Oct 13 10:19:45.217: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n"
Oct 13 10:19:45.217: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0
Oct 13 10:19:55.218: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"'
Oct 13 10:19:55.604: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n"
Oct 13 10:19:55.604: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0
Oct 13 10:20:05.605: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"'
Oct 13 10:20:06.004: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n"
Oct 13 10:20:06.004: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
[AfterEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-prometheus-dcjzj".
STEP: Found 15 events.
Oct 13 10:20:16.068: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-dcjzj/execpod to ostest-n5rnf-worker-0-j4pkp
Oct 13 10:20:16.068: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for myphp-1-build: { } Scheduled: Successfully assigned e2e-test-prometheus-dcjzj/myphp-1-build to ostest-n5rnf-worker-0-8kq82
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:23 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.187.17/23] from kuryr
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {multus } AddedInterface: Add eth0 [10.128.186.69/23] from kuryr
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container manage-dockerfile
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container manage-dockerfile
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:49 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container docker-build
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:49 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container docker-build
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:50 +0000 UTC - event for myphp-1: {build-controller } BuildStarted: Build e2e-test-prometheus-dcjzj/myphp-1 is now running
Oct 13 10:20:16.068: INFO: At 2022-10-13 10:19:19 +0000 UTC - event for myphp-1: {build-controller } BuildCompleted: Build e2e-test-prometheus-dcjzj/myphp-1 completed successfully
Oct 13 10:20:16.077: INFO: POD            NODE                         PHASE      GRACE  CONDITIONS
Oct 13 10:20:16.077: INFO: execpod        ostest-n5rnf-worker-0-j4pkp  Running    1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:17:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:17:34 +0000 UTC  }]
Oct 13 10:20:16.078: INFO: myphp-1-build  ostest-n5rnf-worker-0-8kq82  Succeeded         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:48 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:17 +0000 UTC PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:17 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:28 +0000 UTC  }]
Oct 13 10:20:16.078: INFO: 
Oct 13 10:20:16.091: INFO: skipping dumping cluster info - cluster too large
Oct 13 10:20:16.129: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-prometheus-dcjzj-user}, err: <nil>
Oct 13 10:20:16.167: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-prometheus-dcjzj}, err: <nil>
Oct 13 10:20:16.182: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~DzUTs0iWE0gz2vesoS3bvZCiTtPeo6t3oFXhJFU28AQ}, err: <nil>
[AfterEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-prometheus-dcjzj" for this suite.
[AfterEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus
  github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:35
Oct 13 10:20:16.198: INFO: Dumping pod state for namespace openshift-monitoring
Oct 13 10:20:16.198: INFO: Running 'oc --kubeconfig=.kube/config get pods -n openshift-monitoring -o yaml'
Oct 13 10:20:16.551: INFO: apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.161"
            ],
            "mac": "fa:16:3e:67:65:2e",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.161"
            ],
            "mac": "fa:16:3e:67:65:2e",
            "default": true,
            "dns": {}
        }]
      kubectl.kubernetes.io/default-container: alertmanager
      openshift.io/scc: nonroot
    creationTimestamp: "2022-10-11T16:30:08Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: alertmanager-main-
    labels:
      alertmanager: main
      app: alertmanager
      app.kubernetes.io/component: alert-router
      app.kubernetes.io/instance: main
      app.kubernetes.io/managed-by: prometheus-operator
      app.kubernetes.io/name: alertmanager
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 0.22.2
      controller-revision-hash: alertmanager-main-78c6b7cbfb
      statefulset.kubernetes.io/pod-name: alertmanager-main-0
    name: alertmanager-main-0
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: StatefulSet
      name: alertmanager-main
      uid: f8b4c687-5618-400d-b669-305f7d140ea2
    resourceVersion: "62295"
    uid: 0ba17a85-c575-4eef-ac90-9d8610a62ff3
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app.kubernetes.io/component: alert-router
                app.kubernetes.io/name: alertmanager
                app.kubernetes.io/part-of: openshift-monitoring
            namespaces:
            - openshift-monitoring
            topologyKey: kubernetes.io/hostname
          weight: 100
    containers:
    - args:
      - --config.file=/etc/alertmanager/config/alertmanager.yaml
      - --storage.path=/alertmanager
      - --data.retention=120h
      - --cluster.listen-address=[$(POD_IP)]:9094
      - --web.listen-address=127.0.0.1:9093
      - --web.external-url=https://alertmanager-main-openshift-monitoring.apps.ostest.shiftstack.com/
      - --web.route-prefix=/
      - --cluster.peer=alertmanager-main-0.alertmanager-operated:9094
      - --cluster.peer=alertmanager-main-1.alertmanager-operated:9094
      - --cluster.peer=alertmanager-main-2.alertmanager-operated:9094
      - --cluster.reconnect-timeout=5m
      env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      imagePullPolicy: IfNotPresent
      name: alertmanager
      ports:
      - containerPort: 9094
        name: mesh-tcp
        protocol: TCP
      - containerPort: 9094
        name: mesh-udp
        protocol: UDP
      resources:
        requests:
          cpu: 4m
          memory: 40Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/alertmanager/config
        name: config-volume
      - mountPath: /etc/alertmanager/certs
        name: tls-assets
        readOnly: true
      - mountPath: /alertmanager
        name: alertmanager-main-db
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls
        name: secret-alertmanager-main-tls
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy
        name: secret-alertmanager-main-proxy
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy
        name: secret-alertmanager-kube-rbac-proxy
        readOnly: true
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: alertmanager-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-75ndq
        readOnly: true
    - args:
      - --listen-address=localhost:8080
      - --reload-url=http://localhost:9093/-/reload
      - --watched-dir=/etc/alertmanager/config
      - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-tls
      - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-proxy
      - --watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy
      command:
      - /bin/prometheus-config-reloader
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: SHARD
        value: "-1"
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imagePullPolicy: IfNotPresent
      name: config-reloader
      resources:
        requests:
          cpu: 1m
          memory: 10Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/alertmanager/config
        name: config-volume
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls
        name: secret-alertmanager-main-tls
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy
        name: secret-alertmanager-main-proxy
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy
        name: secret-alertmanager-kube-rbac-proxy
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-75ndq
        readOnly: true
    - args:
      - -provider=openshift
      - -https-address=:9095
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:9093
      - '-openshift-sar=[{"resource": "namespaces", "verb": "get"}, {"resource": "alertmanagers",
        "resourceAPIGroup": "monitoring.coreos.com", "namespace": "openshift-monitoring",
        "verb": "patch", "resourceName": "non-existant"}]'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"},
        "/": {"resource":"alertmanagers", "group": "monitoring.coreos.com", "namespace":
        "openshift-monitoring", "verb": "patch", "name": "non-existant"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-service-account=alertmanager-main
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      env:
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imagePullPolicy: IfNotPresent
      name: alertmanager-proxy
      ports:
      - containerPort: 9095
        name: web
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-alertmanager-main-tls
      - mountPath: /etc/proxy/secrets
        name: secret-alertmanager-main-proxy
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: alertmanager-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-75ndq
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9092
      - --upstream=http://127.0.0.1:9096
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --v=10
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9092
        name: tenancy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/kube-rbac-proxy
        name: secret-alertmanager-kube-rbac-proxy
      - mountPath: /etc/tls/private
        name: secret-alertmanager-main-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-75ndq
        readOnly: true
    - args:
      - --insecure-listen-address=127.0.0.1:9096
      - --upstream=http://127.0.0.1:9093
      - --label=namespace
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imagePullPolicy: IfNotPresent
      name: prom-label-proxy
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-75ndq
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostname: alertmanager-main-0
    imagePullSecrets:
    - name: alertmanager-main-dockercfg-b785d
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 65534
      runAsNonRoot: true
      runAsUser: 65534
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: alertmanager-main
    serviceAccountName: alertmanager-main
    subdomain: alertmanager-operated
    terminationGracePeriodSeconds: 120
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: config-volume
      secret:
        defaultMode: 420
        secretName: alertmanager-main-generated
    - name: tls-assets
      secret:
        defaultMode: 420
        secretName: alertmanager-main-tls-assets
    - name: secret-alertmanager-main-tls
      secret:
        defaultMode: 420
        secretName: alertmanager-main-tls
    - name: secret-alertmanager-main-proxy
      secret:
        defaultMode: 420
        secretName: alertmanager-main-proxy
    - name: secret-alertmanager-kube-rbac-proxy
      secret:
        defaultMode: 420
        secretName: alertmanager-kube-rbac-proxy
    - emptyDir: {}
      name: alertmanager-main-db
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: alertmanager-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: alertmanager-trusted-ca-bundle
    - name: kube-api-access-75ndq
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:09Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:30Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:30Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:08Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://2c9dcfd6ff72bb1a3aac33b967479d1bf17da0911acaada66f7ee25938f4f973
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      lastState: {}
      name: alertmanager
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:28Z"
    - containerID: cri-o://c73085e1f0c21e8cbf861fa42d414ee13fac9636a43a6ae27715cae491fbacb2
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: alertmanager-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:29Z"
    - containerID: cri-o://bec2afaece9da480c2297ff78358bcc3fbac33847189692589310eb7e243de93
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      lastState: {}
      name: config-reloader
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:29Z"
    - containerID: cri-o://0753f97687e0d3fa23ec28e8f92d5bfbbfc205aa76d51a8212a26b525a62de9a
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:29Z"
    - containerID: cri-o://93ba2aa6f1ebd510c3cc6674ecc1ed6416c2e264603432727f8c15c339d9dc1f
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      lastState: {}
      name: prom-label-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:30Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.23.161
    podIPs:
    - ip: 10.128.23.161
    qosClass: Burstable
    startTime: "2022-10-11T16:30:09Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.112"
            ],
            "mac": "fa:16:3e:ac:eb:00",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.112"
            ],
            "mac": "fa:16:3e:ac:eb:00",
            "default": true,
            "dns": {}
        }]
      kubectl.kubernetes.io/default-container: alertmanager
      openshift.io/scc: nonroot
    creationTimestamp: "2022-10-11T16:30:09Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: alertmanager-main-
    labels:
      alertmanager: main
      app: alertmanager
      app.kubernetes.io/component: alert-router
      app.kubernetes.io/instance: main
      app.kubernetes.io/managed-by: prometheus-operator
      app.kubernetes.io/name: alertmanager
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 0.22.2
      controller-revision-hash: alertmanager-main-78c6b7cbfb
      statefulset.kubernetes.io/pod-name: alertmanager-main-1
    name: alertmanager-main-1
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: StatefulSet
      name: alertmanager-main
      uid: f8b4c687-5618-400d-b669-305f7d140ea2
    resourceVersion: "62270"
    uid: 02c4ad64-a941-442b-9c8b-620db031f91a
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app.kubernetes.io/component: alert-router
                app.kubernetes.io/name: alertmanager
                app.kubernetes.io/part-of: openshift-monitoring
            namespaces:
            - openshift-monitoring
            topologyKey: kubernetes.io/hostname
          weight: 100
    containers:
    - args:
      - --config.file=/etc/alertmanager/config/alertmanager.yaml
      - --storage.path=/alertmanager
      - --data.retention=120h
      - --cluster.listen-address=[$(POD_IP)]:9094
      - --web.listen-address=127.0.0.1:9093
      - --web.external-url=https://alertmanager-main-openshift-monitoring.apps.ostest.shiftstack.com/
      - --web.route-prefix=/
      - --cluster.peer=alertmanager-main-0.alertmanager-operated:9094
      - --cluster.peer=alertmanager-main-1.alertmanager-operated:9094
      - --cluster.peer=alertmanager-main-2.alertmanager-operated:9094
      - --cluster.reconnect-timeout=5m
      env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      imagePullPolicy: IfNotPresent
      name: alertmanager
      ports:
      - containerPort: 9094
        name: mesh-tcp
        protocol: TCP
      - containerPort: 9094
        name: mesh-udp
        protocol: UDP
      resources:
        requests:
          cpu: 4m
          memory: 40Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/alertmanager/config
        name: config-volume
      - mountPath: /etc/alertmanager/certs
        name: tls-assets
        readOnly: true
      - mountPath: /alertmanager
        name: alertmanager-main-db
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls
        name: secret-alertmanager-main-tls
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy
        name: secret-alertmanager-main-proxy
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy
        name: secret-alertmanager-kube-rbac-proxy
        readOnly: true
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: alertmanager-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-fxcmh
        readOnly: true
    - args:
      - --listen-address=localhost:8080
      - --reload-url=http://localhost:9093/-/reload
      - --watched-dir=/etc/alertmanager/config
      - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-tls
      - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-proxy
      - --watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy
      command:
      - /bin/prometheus-config-reloader
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: SHARD
        value: "-1"
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imagePullPolicy: IfNotPresent
      name: config-reloader
      resources:
        requests:
          cpu: 1m
          memory: 10Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/alertmanager/config
        name: config-volume
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls
        name: secret-alertmanager-main-tls
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy
        name: secret-alertmanager-main-proxy
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy
        name: secret-alertmanager-kube-rbac-proxy
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-fxcmh
        readOnly: true
    - args:
      - -provider=openshift
      - -https-address=:9095
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:9093
      - '-openshift-sar=[{"resource": "namespaces", "verb": "get"}, {"resource": "alertmanagers",
        "resourceAPIGroup": "monitoring.coreos.com", "namespace": "openshift-monitoring",
        "verb": "patch", "resourceName": "non-existant"}]'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"},
        "/": {"resource":"alertmanagers", "group": "monitoring.coreos.com", "namespace":
        "openshift-monitoring", "verb": "patch", "name": "non-existant"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-service-account=alertmanager-main
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      env:
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imagePullPolicy: IfNotPresent
      name: alertmanager-proxy
      ports:
      - containerPort: 9095
        name: web
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-alertmanager-main-tls
      - mountPath: /etc/proxy/secrets
        name: secret-alertmanager-main-proxy
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: alertmanager-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-fxcmh
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9092
      - --upstream=http://127.0.0.1:9096
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --v=10
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9092
        name: tenancy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/kube-rbac-proxy
        name: secret-alertmanager-kube-rbac-proxy
      - mountPath: /etc/tls/private
        name: secret-alertmanager-main-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-fxcmh
        readOnly: true
    - args:
      - --insecure-listen-address=127.0.0.1:9096
      - --upstream=http://127.0.0.1:9093
      - --label=namespace
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imagePullPolicy: IfNotPresent
      name: prom-label-proxy
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-fxcmh
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostname: alertmanager-main-1
    imagePullSecrets:
    - name: alertmanager-main-dockercfg-b785d
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 65534
      runAsNonRoot: true
      runAsUser: 65534
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: alertmanager-main
    serviceAccountName: alertmanager-main
    subdomain: alertmanager-operated
    terminationGracePeriodSeconds: 120
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: config-volume
      secret:
        defaultMode: 420
        secretName: alertmanager-main-generated
    - name: tls-assets
      secret:
        defaultMode: 420
        secretName: alertmanager-main-tls-assets
    - name: secret-alertmanager-main-tls
      secret:
        defaultMode: 420
        secretName: alertmanager-main-tls
    - name: secret-alertmanager-main-proxy
      secret:
        defaultMode: 420
        secretName: alertmanager-main-proxy
    - name: secret-alertmanager-kube-rbac-proxy
      secret:
        defaultMode: 420
        secretName: alertmanager-kube-rbac-proxy
    - emptyDir: {}
      name: alertmanager-main-db
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: alertmanager-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: alertmanager-trusted-ca-bundle
    - name: kube-api-access-fxcmh
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:09Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:29Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:29Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:09Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://e366e9418471733e9646d38f8002bde25fc9418fd8ee0ee88520f1762496e02b
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      lastState: {}
      name: alertmanager
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:25Z"
    - containerID: cri-o://98a4290d0c4eb18ebe95954ae1df3f5918a709a3a86ef465e0b0e9349caf8c77
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: alertmanager-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:26Z"
    - containerID: cri-o://fffeabe3dfa30d557255c407401c645a2a5693cdba786b0847e21ebd959a2a02
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      lastState: {}
      name: config-reloader
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:26Z"
    - containerID: cri-o://10b9b9bcb478411359a06ddd0fec2974ee46ba41a895f2818ce1421ec9a42931
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:28Z"
    - containerID: cri-o://88b08f0610a8357f4e4f78ce0030241d16e4109d85994c819482d5547277838e
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      lastState: {}
      name: prom-label-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:28Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.22.112
    podIPs:
    - ip: 10.128.22.112
    qosClass: Burstable
    startTime: "2022-10-11T16:30:09Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.138"
            ],
            "mac": "fa:16:3e:d9:01:ce",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.138"
            ],
            "mac": "fa:16:3e:d9:01:ce",
            "default": true,
            "dns": {}
        }]
      kubectl.kubernetes.io/default-container: alertmanager
      openshift.io/scc: nonroot
    creationTimestamp: "2022-10-11T16:30:09Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: alertmanager-main-
    labels:
      alertmanager: main
      app: alertmanager
      app.kubernetes.io/component: alert-router
      app.kubernetes.io/instance: main
      app.kubernetes.io/managed-by: prometheus-operator
      app.kubernetes.io/name: alertmanager
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 0.22.2
      controller-revision-hash: alertmanager-main-78c6b7cbfb
      statefulset.kubernetes.io/pod-name: alertmanager-main-2
    name: alertmanager-main-2
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: StatefulSet
      name: alertmanager-main
      uid: f8b4c687-5618-400d-b669-305f7d140ea2
    resourceVersion: "62077"
    uid: 5be3b096-5513-4dec-92ac-ea79e3e74e38
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app.kubernetes.io/component: alert-router
                app.kubernetes.io/name: alertmanager
                app.kubernetes.io/part-of: openshift-monitoring
            namespaces:
            - openshift-monitoring
            topologyKey: kubernetes.io/hostname
          weight: 100
    containers:
    - args:
      - --config.file=/etc/alertmanager/config/alertmanager.yaml
      - --storage.path=/alertmanager
      - --data.retention=120h
      - --cluster.listen-address=[$(POD_IP)]:9094
      - --web.listen-address=127.0.0.1:9093
      - --web.external-url=https://alertmanager-main-openshift-monitoring.apps.ostest.shiftstack.com/
      - --web.route-prefix=/
      - --cluster.peer=alertmanager-main-0.alertmanager-operated:9094
      - --cluster.peer=alertmanager-main-1.alertmanager-operated:9094
      - --cluster.peer=alertmanager-main-2.alertmanager-operated:9094
      - --cluster.reconnect-timeout=5m
      env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      imagePullPolicy: IfNotPresent
      name: alertmanager
      ports:
      - containerPort: 9094
        name: mesh-tcp
        protocol: TCP
      - containerPort: 9094
        name: mesh-udp
        protocol: UDP
      resources:
        requests:
          cpu: 4m
          memory: 40Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/alertmanager/config
        name: config-volume
      - mountPath: /etc/alertmanager/certs
        name: tls-assets
        readOnly: true
      - mountPath: /alertmanager
        name: alertmanager-main-db
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls
        name: secret-alertmanager-main-tls
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy
        name: secret-alertmanager-main-proxy
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy
        name: secret-alertmanager-kube-rbac-proxy
        readOnly: true
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: alertmanager-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6jzhb
        readOnly: true
    - args:
      - --listen-address=localhost:8080
      - --reload-url=http://localhost:9093/-/reload
      - --watched-dir=/etc/alertmanager/config
      - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-tls
      - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-proxy
      - --watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy
      command:
      - /bin/prometheus-config-reloader
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: SHARD
        value: "-1"
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imagePullPolicy: IfNotPresent
      name: config-reloader
      resources:
        requests:
          cpu: 1m
          memory: 10Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/alertmanager/config
        name: config-volume
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls
        name: secret-alertmanager-main-tls
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy
        name: secret-alertmanager-main-proxy
        readOnly: true
      - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy
        name: secret-alertmanager-kube-rbac-proxy
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6jzhb
        readOnly: true
    - args:
      - -provider=openshift
      - -https-address=:9095
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:9093
      - '-openshift-sar=[{"resource": "namespaces", "verb": "get"}, {"resource": "alertmanagers",
        "resourceAPIGroup": "monitoring.coreos.com", "namespace": "openshift-monitoring",
        "verb": "patch", "resourceName": "non-existant"}]'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"},
        "/": {"resource":"alertmanagers", "group": "monitoring.coreos.com", "namespace":
        "openshift-monitoring", "verb": "patch", "name": "non-existant"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-service-account=alertmanager-main
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      env:
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imagePullPolicy: IfNotPresent
      name: alertmanager-proxy
      ports:
      - containerPort: 9095
        name: web
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-alertmanager-main-tls
      - mountPath: /etc/proxy/secrets
        name: secret-alertmanager-main-proxy
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: alertmanager-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6jzhb
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9092
      - --upstream=http://127.0.0.1:9096
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --v=10
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9092
        name: tenancy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/kube-rbac-proxy
        name: secret-alertmanager-kube-rbac-proxy
      - mountPath: /etc/tls/private
        name: secret-alertmanager-main-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6jzhb
        readOnly: true
    - args:
      - --insecure-listen-address=127.0.0.1:9096
      - --upstream=http://127.0.0.1:9093
      - --label=namespace
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imagePullPolicy: IfNotPresent
      name: prom-label-proxy
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6jzhb
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostname: alertmanager-main-2
    imagePullSecrets:
    - name: alertmanager-main-dockercfg-b785d
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 65534
      runAsNonRoot: true
      runAsUser: 65534
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: alertmanager-main
    serviceAccountName: alertmanager-main
    subdomain: alertmanager-operated
    terminationGracePeriodSeconds: 120
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: config-volume
      secret:
        defaultMode: 420
        secretName: alertmanager-main-generated
    - name: tls-assets
      secret:
        defaultMode: 420
        secretName: alertmanager-main-tls-assets
    - name: secret-alertmanager-main-tls
      secret:
        defaultMode: 420
        secretName: alertmanager-main-tls
    - name: secret-alertmanager-main-proxy
      secret:
        defaultMode: 420
        secretName: alertmanager-main-proxy
    - name: secret-alertmanager-kube-rbac-proxy
      secret:
        defaultMode: 420
        secretName: alertmanager-kube-rbac-proxy
    - emptyDir: {}
      name: alertmanager-main-db
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: alertmanager-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: alertmanager-trusted-ca-bundle
    - name: kube-api-access-6jzhb
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:09Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:14Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:14Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:09Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://fd6940ed75d13e58641fb3c2625a74f1444f998c57011e96a3664f1887f54afa
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      lastState: {}
      name: alertmanager
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:48Z"
    - containerID: cri-o://deef806f883f372089822366aa7ea339fe6d225a75b6371e90d53c7502a1949e
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: alertmanager-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:02Z"
    - containerID: cri-o://c64ab4656d7a8fbca79b3b3553464fcc721387667879bec2d3ad83496e133a78
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      lastState: {}
      name: config-reloader
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:48Z"
    - containerID: cri-o://e6e5a3a23d8d54102c2f5cf0d2e9da477fd2ee238ca40a7b0bd3d83244c07a6b
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:02Z"
    - containerID: cri-o://e4062977155fa4dfe12941f515f944c73e386a9d8b5cef335d6f033fc3f0a57f
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      lastState: {}
      name: prom-label-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:13Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.23.138
    podIPs:
    - ip: 10.128.23.138
    qosClass: Burstable
    startTime: "2022-10-11T16:30:09Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.49"
            ],
            "mac": "fa:16:3e:5b:b3:60",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.49"
            ],
            "mac": "fa:16:3e:5b:b3:60",
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-11T16:09:08Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: cluster-monitoring-operator-79d65bfd5b-
    labels:
      app: cluster-monitoring-operator
      pod-template-hash: 79d65bfd5b
    name: cluster-monitoring-operator-79d65bfd5b-pntd6
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: cluster-monitoring-operator-79d65bfd5b
      uid: 6c319834-bf5f-411b-a63c-b07c34d9783d
    resourceVersion: "8726"
    uid: 83ae671b-d09b-4541-b74f-673d9bbdf563
  spec:
    containers:
    - args:
      - --logtostderr
      - --secure-listen-address=:8443
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:8080/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 8443
        name: https
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: cluster-monitoring-operator-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6544w
        readOnly: true
    - args:
      - -namespace=openshift-monitoring
      - -namespace-user-workload=openshift-user-workload-monitoring
      - -configmap=cluster-monitoring-config
      - -release-version=$(RELEASE_VERSION)
      - -logtostderr=true
      - -v=2
      - -images=prometheus-operator=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62caff9b13ff229d124b2cb633699775684a348b573f6a6f07bd6f4039b7b0f5
      - -images=prometheus-config-reloader=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      - -images=configmap-reloader=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef81374b8f5eeb48afccfcd316f6fe440b8628a2b7d0784c5326419771f368a1
      - -images=prometheus=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
      - -images=alertmanager=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437
      - -images=grafana=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40b0f08ccbe5fa16770c8a6bc71404d50685a52d4cef6c13c3e81d065ec3f91c
      - -images=oauth-proxy=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      - -images=node-exporter=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      - -images=kube-state-metrics=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f68265d31fd49cee8b9d93b26de237588b0b73a7defae45a2682ef379863b16
      - -images=openshift-state-metrics=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f8a93508f2307e7a083d5507f3a76351c26b2e69452209f06885dbafa660dc5
      - -images=kube-rbac-proxy=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      - -images=telemeter-client=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a3f86f1b302389d805f18271a6d00cb2e8b6e9c4a859f9f20aa6d0c4f574371
      - -images=prom-label-proxy=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      - -images=k8s-prometheus-adapter=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1
      - -images=thanos=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      env:
      - name: RELEASE_VERSION
        value: 4.9.0-0.nightly-2022-10-10-022606
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a098108c7f005b4a61829b504ab09fd1af8039f293c68474d2420284fcd467d6
      imagePullPolicy: IfNotPresent
      name: cluster-monitoring-operator
      resources:
        requests:
          cpu: 10m
          memory: 75Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/cluster-monitoring-operator/telemetry
        name: telemetry-config
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6544w
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: ostest-n5rnf-master-0
    nodeSelector:
      kubernetes.io/os: linux
      node-role.kubernetes.io/master: ""
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: cluster-monitoring-operator
    serviceAccountName: cluster-monitoring-operator
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 120
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 120
    volumes:
    - configMap:
        defaultMode: 420
        name: telemetry-config
      name: telemetry-config
    - name: cluster-monitoring-operator-tls
      secret:
        defaultMode: 420
        optional: true
        secretName: cluster-monitoring-operator-tls
    - name: kube-api-access-6544w
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:12:18Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:35Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:35Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:12:17Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://d9bc38f29bf1f312876371c81edaff39007954ef588d63610656e38378b1929e
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a098108c7f005b4a61829b504ab09fd1af8039f293c68474d2420284fcd467d6
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a098108c7f005b4a61829b504ab09fd1af8039f293c68474d2420284fcd467d6
      lastState: {}
      name: cluster-monitoring-operator
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:14:09Z"
    - containerID: cri-o://0e192890a816235784b71f17ee1d0b73c3e92e989e7481491719e4ee0206fd0a
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState:
        terminated:
          containerID: cri-o://e1cb7a10016bca43327e253a74f4b6b2546cdf3284a847b77a0d08b71247c34a
          exitCode: 255
          finishedAt: "2022-10-11T16:14:54Z"
          message: "I1011 16:14:54.418031       1 main.go:181] Valid token audiences:
            \nI1011 16:14:54.418189       1 main.go:305] Reading certificate files\nF1011
            16:14:54.418229       1 main.go:309] Failed to initialize certificate
            reloader: error loading certificates: error loading certificate: open
            /etc/tls/private/tls.crt: no such file or directory\ngoroutine 1 [running]:\nk8s.io/klog/v2.stacks(0xc0000c4001,
            0xc0004f6000, 0xc6, 0x1c8)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:996
            +0xb9\nk8s.io/klog/v2.(*loggingT).output(0x229c320, 0xc000000003, 0x0,
            0x0, 0xc0001e4770, 0x1c0063b, 0x7, 0x135, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:945
            +0x191\nk8s.io/klog/v2.(*loggingT).printf(0x229c320, 0x3, 0x0, 0x0, 0x176d0d9,
            0x2d, 0xc000499c38, 0x1, 0x1)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:733
            +0x17a\nk8s.io/klog/v2.Fatalf(...)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1463\nmain.main()\n\t/go/src/github.com/brancz/kube-rbac-proxy/main.go:309
            +0x21f8\n\ngoroutine 18 [chan receive]:\nk8s.io/klog/v2.(*loggingT).flushDaemon(0x229c320)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1131
            +0x8b\ncreated by k8s.io/klog/v2.init.0\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:416
            +0xd8\n"
          reason: Error
          startedAt: "2022-10-11T16:14:54Z"
      name: kube-rbac-proxy
      ready: true
      restartCount: 4
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:15:35Z"
    hostIP: 10.196.0.105
    phase: Running
    podIP: 10.128.23.49
    podIPs:
    - ip: 10.128.23.49
    qosClass: Burstable
    startTime: "2022-10-11T16:12:18Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      checksum/grafana-config: bcf6fd722b2c76f194401f4b8e20d0af
      checksum/grafana-datasources: ae625c50302c7e8068dc3600dbd686cc
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.230"
            ],
            "mac": "fa:16:3e:d1:2a:fb",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.230"
            ],
            "mac": "fa:16:3e:d1:2a:fb",
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-11T16:30:10Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: grafana-7c5c5fb5b6-
    labels:
      app.kubernetes.io/component: grafana
      app.kubernetes.io/name: grafana
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 7.5.11
      pod-template-hash: 7c5c5fb5b6
    name: grafana-7c5c5fb5b6-cht4p
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: grafana-7c5c5fb5b6
      uid: 779121cb-12a9-4091-a906-7df12c28c1b7
    resourceVersion: "61707"
    uid: 59162dd9-267d-4146-bca6-ddbdc3930d01
  spec:
    containers:
    - args:
      - -config=/etc/grafana/grafana.ini
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40b0f08ccbe5fa16770c8a6bc71404d50685a52d4cef6c13c3e81d065ec3f91c
      imagePullPolicy: IfNotPresent
      name: grafana
      ports:
      - containerPort: 3001
        name: http
        protocol: TCP
      resources:
        requests:
          cpu: 4m
          memory: 64Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/lib/grafana
        name: grafana-storage
      - mountPath: /etc/grafana/provisioning/datasources
        name: grafana-datasources
      - mountPath: /etc/grafana/provisioning/dashboards
        name: grafana-dashboards
      - mountPath: /grafana-dashboard-definitions/0/cluster-total
        name: grafana-dashboard-cluster-total
      - mountPath: /grafana-dashboard-definitions/0/etcd
        name: grafana-dashboard-etcd
      - mountPath: /grafana-dashboard-definitions/0/k8s-resources-cluster
        name: grafana-dashboard-k8s-resources-cluster
      - mountPath: /grafana-dashboard-definitions/0/k8s-resources-namespace
        name: grafana-dashboard-k8s-resources-namespace
      - mountPath: /grafana-dashboard-definitions/0/k8s-resources-node
        name: grafana-dashboard-k8s-resources-node
      - mountPath: /grafana-dashboard-definitions/0/k8s-resources-pod
        name: grafana-dashboard-k8s-resources-pod
      - mountPath: /grafana-dashboard-definitions/0/k8s-resources-workload
        name: grafana-dashboard-k8s-resources-workload
      - mountPath: /grafana-dashboard-definitions/0/k8s-resources-workloads-namespace
        name: grafana-dashboard-k8s-resources-workloads-namespace
      - mountPath: /grafana-dashboard-definitions/0/namespace-by-pod
        name: grafana-dashboard-namespace-by-pod
      - mountPath: /grafana-dashboard-definitions/0/node-cluster-rsrc-use
        name: grafana-dashboard-node-cluster-rsrc-use
      - mountPath: /grafana-dashboard-definitions/0/node-rsrc-use
        name: grafana-dashboard-node-rsrc-use
      - mountPath: /grafana-dashboard-definitions/0/pod-total
        name: grafana-dashboard-pod-total
      - mountPath: /grafana-dashboard-definitions/0/prometheus
        name: grafana-dashboard-prometheus
      - mountPath: /etc/grafana
        name: grafana-config
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-pxsmk
        readOnly: true
    - args:
      - -provider=openshift
      - -https-address=:3000
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:3001
      - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-service-account=grafana
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      env:
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imagePullPolicy: IfNotPresent
      name: grafana-proxy
      ports:
      - containerPort: 3000
        name: https
        protocol: TCP
      readinessProbe:
        failureThreshold: 3
        httpGet:
          path: /oauth/healthz
          port: https
          scheme: HTTPS
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-grafana-tls
      - mountPath: /etc/proxy/secrets
        name: secret-grafana-proxy
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: grafana-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-pxsmk
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: grafana-dockercfg-9vtxq
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: grafana
    serviceAccountName: grafana
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - emptyDir: {}
      name: grafana-storage
    - name: grafana-datasources
      secret:
        defaultMode: 420
        secretName: grafana-datasources
    - configMap:
        defaultMode: 420
        name: grafana-dashboards
      name: grafana-dashboards
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-cluster-total
      name: grafana-dashboard-cluster-total
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-etcd
      name: grafana-dashboard-etcd
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-k8s-resources-cluster
      name: grafana-dashboard-k8s-resources-cluster
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-k8s-resources-namespace
      name: grafana-dashboard-k8s-resources-namespace
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-k8s-resources-node
      name: grafana-dashboard-k8s-resources-node
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-k8s-resources-pod
      name: grafana-dashboard-k8s-resources-pod
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-k8s-resources-workload
      name: grafana-dashboard-k8s-resources-workload
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-k8s-resources-workloads-namespace
      name: grafana-dashboard-k8s-resources-workloads-namespace
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-namespace-by-pod
      name: grafana-dashboard-namespace-by-pod
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-node-cluster-rsrc-use
      name: grafana-dashboard-node-cluster-rsrc-use
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-node-rsrc-use
      name: grafana-dashboard-node-rsrc-use
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-pod-total
      name: grafana-dashboard-pod-total
    - configMap:
        defaultMode: 420
        name: grafana-dashboard-prometheus
      name: grafana-dashboard-prometheus
    - name: grafana-config
      secret:
        defaultMode: 420
        secretName: grafana-config
    - name: secret-grafana-tls
      secret:
        defaultMode: 420
        secretName: grafana-tls
    - name: secret-grafana-proxy
      secret:
        defaultMode: 420
        secretName: grafana-proxy
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: grafana-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: grafana-trusted-ca-bundle
    - name: kube-api-access-pxsmk
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:10Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:03Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:03Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:10Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://9e715273e4cebed3a936917501575a378dfbcc8b7f76aaeb5970fde74bad2ebc
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40b0f08ccbe5fa16770c8a6bc71404d50685a52d4cef6c13c3e81d065ec3f91c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40b0f08ccbe5fa16770c8a6bc71404d50685a52d4cef6c13c3e81d065ec3f91c
      lastState: {}
      name: grafana
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:02Z"
    - containerID: cri-o://dad6ae57fad580b2f39380be96742bce1def9f9079e1baf2fe8c0f52ac6071af
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: grafana-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:03Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.22.230
    podIPs:
    - ip: 10.128.22.230
    qosClass: Burstable
    startTime: "2022-10-11T16:30:10Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.45"
            ],
            "mac": "fa:16:3e:68:1e:0a",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.45"
            ],
            "mac": "fa:16:3e:68:1e:0a",
            "default": true,
            "dns": {}
        }]
      kubectl.kubernetes.io/default-container: kube-state-metrics
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-11T16:14:59Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: kube-state-metrics-754df74859-
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: kube-state-metrics
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 2.0.0
      pod-template-hash: 754df74859
    name: kube-state-metrics-754df74859-w8k5h
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: kube-state-metrics-754df74859
      uid: 114e9015-6830-4d58-bb6d-cc5d7c4427af
    resourceVersion: "61212"
    uid: cb715a58-6c73-45b7-ad0e-f96ecd04c1e5
  spec:
    containers:
    - args:
      - --host=127.0.0.1
      - --port=8081
      - --telemetry-host=127.0.0.1
      - --telemetry-port=8082
      - --metric-denylist=kube_secret_labels
      - --metric-labels-allowlist=pods=[*],nodes=[*],namespaces=[*],persistentvolumes=[*],persistentvolumeclaims=[*]
      - |
        --metric-denylist=
        kube_.+_created,
        kube_.+_metadata_resource_version,
        kube_replicaset_metadata_generation,
        kube_replicaset_status_observed_generation,
        kube_pod_restart_policy,
        kube_pod_init_container_status_terminated,
        kube_pod_init_container_status_running,
        kube_pod_container_status_terminated,
        kube_pod_container_status_running,
        kube_pod_completion_time,
        kube_pod_status_scheduled
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f68265d31fd49cee8b9d93b26de237588b0b73a7defae45a2682ef379863b16
      imagePullPolicy: IfNotPresent
      name: kube-state-metrics
      resources:
        requests:
          cpu: 2m
          memory: 80Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /tmp
        name: volume-directive-shadow
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-2k9gg
        readOnly: true
    - args:
      - --logtostderr
      - --secure-listen-address=:8443
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:8081/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --client-ca-file=/etc/tls/client/client-ca.crt
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-main
      ports:
      - containerPort: 8443
        name: https-main
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: kube-state-metrics-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-2k9gg
        readOnly: true
    - args:
      - --logtostderr
      - --secure-listen-address=:9443
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:8082/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --client-ca-file=/etc/tls/client/client-ca.crt
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-self
      ports:
      - containerPort: 9443
        name: https-self
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: kube-state-metrics-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-2k9gg
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: kube-state-metrics
    serviceAccountName: kube-state-metrics
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - emptyDir: {}
      name: volume-directive-shadow
    - name: kube-state-metrics-tls
      secret:
        defaultMode: 420
        secretName: kube-state-metrics-tls
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: kube-api-access-2k9gg
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:52Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:38Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:38Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:52Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://2bc4e8a0a8586d3fb8d893efdc6953e8255fb2cc8696b28ff9f46a3601a39442
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-main
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:23Z"
    - containerID: cri-o://2d9cf2111e56c0641bbd9fbc36903c69e746944b1ee8bbe61d29cdd47d3adef0
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-self
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:36Z"
    - containerID: cri-o://e7e7a842a335cb2835376b93d73312c8ccb3783f186415f04953caa194604422
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f68265d31fd49cee8b9d93b26de237588b0b73a7defae45a2682ef379863b16
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f68265d31fd49cee8b9d93b26de237588b0b73a7defae45a2682ef379863b16
      lastState: {}
      name: kube-state-metrics
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:22Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.22.45
    podIPs:
    - ip: 10.128.22.45
    qosClass: Burstable
    startTime: "2022-10-11T16:29:52Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      openshift.io/scc: node-exporter
    creationTimestamp: "2022-10-11T16:29:42Z"
    generateName: node-exporter-
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: node-exporter
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 1.1.2
      controller-revision-hash: 7f9b7bd8b5
      pod-template-generation: "1"
    name: node-exporter-7cn6l
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: DaemonSet
      name: node-exporter
      uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b
    resourceVersion: "60893"
    uid: 6abaa413-0438-48a2-add5-04718c115244
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - ostest-n5rnf-worker-0-j4pkp
    containers:
    - args:
      - --web.listen-address=127.0.0.1:9100
      - --path.sysfs=/host/sys
      - --path.rootfs=/host/root
      - --no-collector.wifi
      - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
      - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
      - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
      - --collector.cpu.info
      - --collector.textfile.directory=/var/node_exporter/textfile
      - --no-collector.cpufreq
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: node-exporter
      resources:
        requests:
          cpu: 8m
          memory: 32Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /host/sys
        mountPropagation: HostToContainer
        name: sys
        readOnly: true
      - mountPath: /host/root
        mountPropagation: HostToContainer
        name: root
        readOnly: true
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-rn22c
        readOnly: true
      workingDir: /var/node_exporter/textfile
    - args:
      - --logtostderr
      - --secure-listen-address=[$(IP)]:9100
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:9100/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --client-ca-file=/etc/tls/client/client-ca.crt
      env:
      - name: IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9100
        hostPort: 9100
        name: https
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: node-exporter-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-rn22c
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    hostPID: true
    imagePullSecrets:
    - name: node-exporter-dockercfg-d64pg
    initContainers:
    - command:
      - /bin/sh
      - -c
      - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init
        -perm /111 -type f -exec {} \;'
      env:
      - name: TMPDIR
        value: /tmp
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: init-textfile
      resources:
        requests:
          cpu: 1m
          memory: 1Mi
      securityContext:
        privileged: true
        runAsUser: 0
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
      - mountPath: /var/log/wtmp
        name: node-exporter-wtmp
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-rn22c
        readOnly: true
      workingDir: /var/node_exporter/textfile
    nodeName: ostest-n5rnf-worker-0-j4pkp
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: node-exporter
    serviceAccountName: node-exporter
    terminationGracePeriodSeconds: 30
    tolerations:
    - operator: Exists
    volumes:
    - hostPath:
        path: /sys
        type: ""
      name: sys
    - hostPath:
        path: /
        type: ""
      name: root
    - emptyDir: {}
      name: node-exporter-textfile
    - name: node-exporter-tls
      secret:
        defaultMode: 420
        secretName: node-exporter-tls
    - hostPath:
        path: /var/log/wtmp
        type: File
      name: node-exporter-wtmp
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: kube-api-access-rn22c
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:52Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:23Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:23Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:42Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://7ef6ac436d272d70676ed277caef23f19c00c8417a2bc96126e6700fa76d6feb
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:23Z"
    - containerID: cri-o://8bce7ab90066cc6dc9fe7a5f6459772c1ba2c8c4e057583ab8e8d4f8707eb36a
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: node-exporter
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:00Z"
    hostIP: 10.196.0.199
    initContainerStatuses:
    - containerID: cri-o://5404ad006e61510210f3f1ee208b588d3cdd985728da5a937026c0c3d61fa5fa
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: init-textfile
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://5404ad006e61510210f3f1ee208b588d3cdd985728da5a937026c0c3d61fa5fa
          exitCode: 0
          finishedAt: "2022-10-11T16:29:52Z"
          reason: Completed
          startedAt: "2022-10-11T16:29:51Z"
    phase: Running
    podIP: 10.196.0.199
    podIPs:
    - ip: 10.196.0.199
    qosClass: Burstable
    startTime: "2022-10-11T16:29:43Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      openshift.io/scc: node-exporter
    creationTimestamp: "2022-10-11T16:31:11Z"
    generateName: node-exporter-
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: node-exporter
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 1.1.2
      controller-revision-hash: 7f9b7bd8b5
      pod-template-generation: "1"
    name: node-exporter-7n85z
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: DaemonSet
      name: node-exporter
      uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b
    resourceVersion: "62880"
    uid: e520f6ac-f247-4e36-a129-d0b4f724c1a3
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - ostest-n5rnf-worker-0-8kq82
    containers:
    - args:
      - --web.listen-address=127.0.0.1:9100
      - --path.sysfs=/host/sys
      - --path.rootfs=/host/root
      - --no-collector.wifi
      - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
      - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
      - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
      - --collector.cpu.info
      - --collector.textfile.directory=/var/node_exporter/textfile
      - --no-collector.cpufreq
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: node-exporter
      resources:
        requests:
          cpu: 8m
          memory: 32Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /host/sys
        mountPropagation: HostToContainer
        name: sys
        readOnly: true
      - mountPath: /host/root
        mountPropagation: HostToContainer
        name: root
        readOnly: true
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-7drvz
        readOnly: true
      workingDir: /var/node_exporter/textfile
    - args:
      - --logtostderr
      - --secure-listen-address=[$(IP)]:9100
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:9100/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --client-ca-file=/etc/tls/client/client-ca.crt
      env:
      - name: IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9100
        hostPort: 9100
        name: https
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: node-exporter-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-7drvz
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    hostPID: true
    imagePullSecrets:
    - name: node-exporter-dockercfg-d64pg
    initContainers:
    - command:
      - /bin/sh
      - -c
      - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init
        -perm /111 -type f -exec {} \;'
      env:
      - name: TMPDIR
        value: /tmp
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: init-textfile
      resources:
        requests:
          cpu: 1m
          memory: 1Mi
      securityContext:
        privileged: true
        runAsUser: 0
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
      - mountPath: /var/log/wtmp
        name: node-exporter-wtmp
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-7drvz
        readOnly: true
      workingDir: /var/node_exporter/textfile
    nodeName: ostest-n5rnf-worker-0-8kq82
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: node-exporter
    serviceAccountName: node-exporter
    terminationGracePeriodSeconds: 30
    tolerations:
    - operator: Exists
    volumes:
    - hostPath:
        path: /sys
        type: ""
      name: sys
    - hostPath:
        path: /
        type: ""
      name: root
    - emptyDir: {}
      name: node-exporter-textfile
    - name: node-exporter-tls
      secret:
        defaultMode: 420
        secretName: node-exporter-tls
    - hostPath:
        path: /var/log/wtmp
        type: File
      name: node-exporter-wtmp
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: kube-api-access-7drvz
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:57Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:32:10Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:32:10Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:12Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://68b6b4d6b4aa09b8e9ca3954cd9442da1a5d97db75730f3c1256d48aeeac1505
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:32:10Z"
    - containerID: cri-o://6cc945323e091d0db19e5a717fe18395e1ef45fef020dd6f6d93f8a6bdc705dd
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: node-exporter
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:58Z"
    hostIP: 10.196.2.72
    initContainerStatuses:
    - containerID: cri-o://bba884c1b85e67cce00e3169715b99c67c94bfdf76c6e493e714680629b153d1
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: init-textfile
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://bba884c1b85e67cce00e3169715b99c67c94bfdf76c6e493e714680629b153d1
          exitCode: 0
          finishedAt: "2022-10-11T16:31:57Z"
          reason: Completed
          startedAt: "2022-10-11T16:31:57Z"
    phase: Running
    podIP: 10.196.2.72
    podIPs:
    - ip: 10.196.2.72
    qosClass: Burstable
    startTime: "2022-10-11T16:31:43Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      openshift.io/scc: node-exporter
    creationTimestamp: "2022-10-11T16:14:59Z"
    generateName: node-exporter-
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: node-exporter
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 1.1.2
      controller-revision-hash: 7f9b7bd8b5
      pod-template-generation: "1"
    name: node-exporter-dlzvz
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: DaemonSet
      name: node-exporter
      uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b
    resourceVersion: "7424"
    uid: 053a3770-cf8f-4156-bd99-3d8ad58a3f16
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - ostest-n5rnf-master-1
    containers:
    - args:
      - --web.listen-address=127.0.0.1:9100
      - --path.sysfs=/host/sys
      - --path.rootfs=/host/root
      - --no-collector.wifi
      - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
      - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
      - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
      - --collector.cpu.info
      - --collector.textfile.directory=/var/node_exporter/textfile
      - --no-collector.cpufreq
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: node-exporter
      resources:
        requests:
          cpu: 8m
          memory: 32Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /host/sys
        mountPropagation: HostToContainer
        name: sys
        readOnly: true
      - mountPath: /host/root
        mountPropagation: HostToContainer
        name: root
        readOnly: true
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ldk97
        readOnly: true
      workingDir: /var/node_exporter/textfile
    - args:
      - --logtostderr
      - --secure-listen-address=[$(IP)]:9100
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:9100/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --client-ca-file=/etc/tls/client/client-ca.crt
      env:
      - name: IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9100
        hostPort: 9100
        name: https
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: node-exporter-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ldk97
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    hostPID: true
    initContainers:
    - command:
      - /bin/sh
      - -c
      - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init
        -perm /111 -type f -exec {} \;'
      env:
      - name: TMPDIR
        value: /tmp
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: init-textfile
      resources:
        requests:
          cpu: 1m
          memory: 1Mi
      securityContext:
        privileged: true
        runAsUser: 0
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
      - mountPath: /var/log/wtmp
        name: node-exporter-wtmp
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ldk97
        readOnly: true
      workingDir: /var/node_exporter/textfile
    nodeName: ostest-n5rnf-master-1
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: node-exporter
    serviceAccountName: node-exporter
    terminationGracePeriodSeconds: 30
    tolerations:
    - operator: Exists
    volumes:
    - hostPath:
        path: /sys
        type: ""
      name: sys
    - hostPath:
        path: /
        type: ""
      name: root
    - emptyDir: {}
      name: node-exporter-textfile
    - name: node-exporter-tls
      secret:
        defaultMode: 420
        secretName: node-exporter-tls
    - hostPath:
        path: /var/log/wtmp
        type: File
      name: node-exporter-wtmp
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: kube-api-access-ldk97
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:07Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:08Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:08Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:14:59Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://5373299b125193fa5b727225158ec0ab6a0250777a9c85ab33e3ea749e13dac9
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:15:07Z"
    - containerID: cri-o://d3d461cfa8b306c9cc0bd5cbb850d134aa35d7b1a48f3f34e5253fee6cfe9e5b
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: node-exporter
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:15:07Z"
    hostIP: 10.196.3.178
    initContainerStatuses:
    - containerID: cri-o://dba0ea8292079f2252e506cfea37c6d5b090192b53ad2c9736889832e75144b5
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: init-textfile
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://dba0ea8292079f2252e506cfea37c6d5b090192b53ad2c9736889832e75144b5
          exitCode: 0
          finishedAt: "2022-10-11T16:15:06Z"
          reason: Completed
          startedAt: "2022-10-11T16:15:06Z"
    phase: Running
    podIP: 10.196.3.178
    podIPs:
    - ip: 10.196.3.178
    qosClass: Burstable
    startTime: "2022-10-11T16:14:59Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      openshift.io/scc: node-exporter
    creationTimestamp: "2022-10-11T16:29:01Z"
    generateName: node-exporter-
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: node-exporter
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 1.1.2
      controller-revision-hash: 7f9b7bd8b5
      pod-template-generation: "1"
    name: node-exporter-fvjvs
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: DaemonSet
      name: node-exporter
      uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b
    resourceVersion: "59128"
    uid: 958a88c3-9530-40ea-93bc-364e7b008d04
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - ostest-n5rnf-worker-0-94fxs
    containers:
    - args:
      - --web.listen-address=127.0.0.1:9100
      - --path.sysfs=/host/sys
      - --path.rootfs=/host/root
      - --no-collector.wifi
      - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
      - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
      - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
      - --collector.cpu.info
      - --collector.textfile.directory=/var/node_exporter/textfile
      - --no-collector.cpufreq
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: node-exporter
      resources:
        requests:
          cpu: 8m
          memory: 32Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /host/sys
        mountPropagation: HostToContainer
        name: sys
        readOnly: true
      - mountPath: /host/root
        mountPropagation: HostToContainer
        name: root
        readOnly: true
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-4t982
        readOnly: true
      workingDir: /var/node_exporter/textfile
    - args:
      - --logtostderr
      - --secure-listen-address=[$(IP)]:9100
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:9100/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --client-ca-file=/etc/tls/client/client-ca.crt
      env:
      - name: IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9100
        hostPort: 9100
        name: https
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: node-exporter-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-4t982
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    hostPID: true
    imagePullSecrets:
    - name: node-exporter-dockercfg-d64pg
    initContainers:
    - command:
      - /bin/sh
      - -c
      - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init
        -perm /111 -type f -exec {} \;'
      env:
      - name: TMPDIR
        value: /tmp
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: init-textfile
      resources:
        requests:
          cpu: 1m
          memory: 1Mi
      securityContext:
        privileged: true
        runAsUser: 0
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
      - mountPath: /var/log/wtmp
        name: node-exporter-wtmp
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-4t982
        readOnly: true
      workingDir: /var/node_exporter/textfile
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: node-exporter
    serviceAccountName: node-exporter
    terminationGracePeriodSeconds: 30
    tolerations:
    - operator: Exists
    volumes:
    - hostPath:
        path: /sys
        type: ""
      name: sys
    - hostPath:
        path: /
        type: ""
      name: root
    - emptyDir: {}
      name: node-exporter-textfile
    - name: node-exporter-tls
      secret:
        defaultMode: 420
        secretName: node-exporter-tls
    - hostPath:
        path: /var/log/wtmp
        type: File
      name: node-exporter-wtmp
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: kube-api-access-4t982
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:10Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:26Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:26Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:02Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://f74b9cf71d559ebcde03172d54fb8a03dba5d82fdc1b9cc67b90d0c114bd3c49
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:29:26Z"
    - containerID: cri-o://fc83935f5205d1369f82c357893afec8b561f0101fea50dee1c92546ef6fe6f7
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: node-exporter
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:29:10Z"
    hostIP: 10.196.2.169
    initContainerStatuses:
    - containerID: cri-o://a43e7f6354f638f721d6b91cf1d6809d487f411b25272d590874bd79b40ea251
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: init-textfile
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://a43e7f6354f638f721d6b91cf1d6809d487f411b25272d590874bd79b40ea251
          exitCode: 0
          finishedAt: "2022-10-11T16:29:10Z"
          reason: Completed
          startedAt: "2022-10-11T16:29:09Z"
    phase: Running
    podIP: 10.196.2.169
    podIPs:
    - ip: 10.196.2.169
    qosClass: Burstable
    startTime: "2022-10-11T16:29:02Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      openshift.io/scc: node-exporter
    creationTimestamp: "2022-10-11T16:14:59Z"
    generateName: node-exporter-
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: node-exporter
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 1.1.2
      controller-revision-hash: 7f9b7bd8b5
      pod-template-generation: "1"
    name: node-exporter-g96tz
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: DaemonSet
      name: node-exporter
      uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b
    resourceVersion: "7398"
    uid: 238be02b-d34b-4005-94a3-e900dadfb56b
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - ostest-n5rnf-master-2
    containers:
    - args:
      - --web.listen-address=127.0.0.1:9100
      - --path.sysfs=/host/sys
      - --path.rootfs=/host/root
      - --no-collector.wifi
      - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
      - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
      - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
      - --collector.cpu.info
      - --collector.textfile.directory=/var/node_exporter/textfile
      - --no-collector.cpufreq
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: node-exporter
      resources:
        requests:
          cpu: 8m
          memory: 32Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /host/sys
        mountPropagation: HostToContainer
        name: sys
        readOnly: true
      - mountPath: /host/root
        mountPropagation: HostToContainer
        name: root
        readOnly: true
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-dg9wx
        readOnly: true
      workingDir: /var/node_exporter/textfile
    - args:
      - --logtostderr
      - --secure-listen-address=[$(IP)]:9100
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:9100/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --client-ca-file=/etc/tls/client/client-ca.crt
      env:
      - name: IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9100
        hostPort: 9100
        name: https
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: node-exporter-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-dg9wx
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    hostPID: true
    initContainers:
    - command:
      - /bin/sh
      - -c
      - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init
        -perm /111 -type f -exec {} \;'
      env:
      - name: TMPDIR
        value: /tmp
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: init-textfile
      resources:
        requests:
          cpu: 1m
          memory: 1Mi
      securityContext:
        privileged: true
        runAsUser: 0
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
      - mountPath: /var/log/wtmp
        name: node-exporter-wtmp
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-dg9wx
        readOnly: true
      workingDir: /var/node_exporter/textfile
    nodeName: ostest-n5rnf-master-2
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: node-exporter
    serviceAccountName: node-exporter
    terminationGracePeriodSeconds: 30
    tolerations:
    - operator: Exists
    volumes:
    - hostPath:
        path: /sys
        type: ""
      name: sys
    - hostPath:
        path: /
        type: ""
      name: root
    - emptyDir: {}
      name: node-exporter-textfile
    - name: node-exporter-tls
      secret:
        defaultMode: 420
        secretName: node-exporter-tls
    - hostPath:
        path: /var/log/wtmp
        type: File
      name: node-exporter-wtmp
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: kube-api-access-dg9wx
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:06Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:07Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:07Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:14:59Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://4515a68e11fbcf83c92ca4670136f5c0ed6c8070a8290f30e48612aaa652e8f3
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:15:06Z"
    - containerID: cri-o://2c585a82c9b96cb30ca8c16ed49abec4bc4a66d69d19369978173b2f2ea836c5
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: node-exporter
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:15:06Z"
    hostIP: 10.196.3.187
    initContainerStatuses:
    - containerID: cri-o://d5cb7d9c128b19de4497b7ad6a16b1b8e4bc98326327c7d284b712e364afc31a
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: init-textfile
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://d5cb7d9c128b19de4497b7ad6a16b1b8e4bc98326327c7d284b712e364afc31a
          exitCode: 0
          finishedAt: "2022-10-11T16:15:06Z"
          reason: Completed
          startedAt: "2022-10-11T16:15:06Z"
    phase: Running
    podIP: 10.196.3.187
    podIPs:
    - ip: 10.196.3.187
    qosClass: Burstable
    startTime: "2022-10-11T16:14:59Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      openshift.io/scc: node-exporter
    creationTimestamp: "2022-10-11T16:14:59Z"
    generateName: node-exporter-
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: node-exporter
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 1.1.2
      controller-revision-hash: 7f9b7bd8b5
      pod-template-generation: "1"
    name: node-exporter-p5vmg
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: DaemonSet
      name: node-exporter
      uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b
    resourceVersion: "7818"
    uid: b8ff8622-729e-4729-a7e7-8697864e6d5a
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - ostest-n5rnf-master-0
    containers:
    - args:
      - --web.listen-address=127.0.0.1:9100
      - --path.sysfs=/host/sys
      - --path.rootfs=/host/root
      - --no-collector.wifi
      - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
      - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
      - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
      - --collector.cpu.info
      - --collector.textfile.directory=/var/node_exporter/textfile
      - --no-collector.cpufreq
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: node-exporter
      resources:
        requests:
          cpu: 8m
          memory: 32Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /host/sys
        mountPropagation: HostToContainer
        name: sys
        readOnly: true
      - mountPath: /host/root
        mountPropagation: HostToContainer
        name: root
        readOnly: true
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-l4vzn
        readOnly: true
      workingDir: /var/node_exporter/textfile
    - args:
      - --logtostderr
      - --secure-listen-address=[$(IP)]:9100
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:9100/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --client-ca-file=/etc/tls/client/client-ca.crt
      env:
      - name: IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9100
        hostPort: 9100
        name: https
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: node-exporter-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-l4vzn
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    hostPID: true
    initContainers:
    - command:
      - /bin/sh
      - -c
      - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init
        -perm /111 -type f -exec {} \;'
      env:
      - name: TMPDIR
        value: /tmp
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imagePullPolicy: IfNotPresent
      name: init-textfile
      resources:
        requests:
          cpu: 1m
          memory: 1Mi
      securityContext:
        privileged: true
        runAsUser: 0
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/node_exporter/textfile
        name: node-exporter-textfile
      - mountPath: /var/log/wtmp
        name: node-exporter-wtmp
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-l4vzn
        readOnly: true
      workingDir: /var/node_exporter/textfile
    nodeName: ostest-n5rnf-master-0
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: node-exporter
    serviceAccountName: node-exporter
    terminationGracePeriodSeconds: 30
    tolerations:
    - operator: Exists
    volumes:
    - hostPath:
        path: /sys
        type: ""
      name: sys
    - hostPath:
        path: /
        type: ""
      name: root
    - emptyDir: {}
      name: node-exporter-textfile
    - name: node-exporter-tls
      secret:
        defaultMode: 420
        secretName: node-exporter-tls
    - hostPath:
        path: /var/log/wtmp
        type: File
      name: node-exporter-wtmp
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: kube-api-access-l4vzn
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:12Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:13Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:15:13Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:14:59Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://f3450a061fd7c1856256b6a277071ae96f823b86648ea227e9b385b84b9beb33
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:15:13Z"
    - containerID: cri-o://f2654a5ddb3243c9c4bec1f33d5aa787d0479c1c74638d985503b7fb085660f5
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: node-exporter
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:15:12Z"
    hostIP: 10.196.0.105
    initContainerStatuses:
    - containerID: cri-o://fd6867f1a4b365181be0913d90bb089fdd37800bf5c8d0a19a2f69459710ae56
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd
      lastState: {}
      name: init-textfile
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://fd6867f1a4b365181be0913d90bb089fdd37800bf5c8d0a19a2f69459710ae56
          exitCode: 0
          finishedAt: "2022-10-11T16:15:11Z"
          reason: Completed
          startedAt: "2022-10-11T16:15:11Z"
    phase: Running
    podIP: 10.196.0.105
    podIPs:
    - ip: 10.196.0.105
    qosClass: Burstable
    startTime: "2022-10-11T16:14:59Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.89"
            ],
            "mac": "fa:16:3e:88:c2:40",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.89"
            ],
            "mac": "fa:16:3e:88:c2:40",
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-11T16:14:59Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: openshift-state-metrics-c59c784c4-
    labels:
      k8s-app: openshift-state-metrics
      pod-template-hash: c59c784c4
    name: openshift-state-metrics-c59c784c4-f5f7v
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: openshift-state-metrics-c59c784c4
      uid: e98067fb-b51e-4f67-bae7-2d67107bbb6d
    resourceVersion: "62759"
    uid: f3277e62-2a87-4978-8163-8b1023dc4f80
  spec:
    containers:
    - args:
      - --logtostderr
      - --secure-listen-address=:8443
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:8081/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-main
      ports:
      - containerPort: 8443
        name: https-main
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/tls/private
        name: openshift-state-metrics-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6t86l
        readOnly: true
    - args:
      - --logtostderr
      - --secure-listen-address=:9443
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=http://127.0.0.1:8082/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-self
      ports:
      - containerPort: 9443
        name: https-self
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/tls/private
        name: openshift-state-metrics-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6t86l
        readOnly: true
    - args:
      - --host=127.0.0.1
      - --port=8081
      - --telemetry-host=127.0.0.1
      - --telemetry-port=8082
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f8a93508f2307e7a083d5507f3a76351c26b2e69452209f06885dbafa660dc5
      imagePullPolicy: IfNotPresent
      name: openshift-state-metrics
      resources:
        requests:
          cpu: 1m
          memory: 32Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-6t86l
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: openshift-state-metrics
    serviceAccountName: openshift-state-metrics
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: openshift-state-metrics-tls
      secret:
        defaultMode: 420
        secretName: openshift-state-metrics-tls
    - name: kube-api-access-6t86l
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:52Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:32:01Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:32:01Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:52Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://24152310400c510959a71b9305b4b856a49b342c3cf5a553d58f5492b367432a
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-main
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:47Z"
    - containerID: cri-o://9095dc1a211202ee760c13c86dda869eb8eaf5925be748d567fddf853dc01e80
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-self
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:47Z"
    - containerID: cri-o://3f17d69b2b40ed701829b086a69ea9f6e380b6a6fd584e7fbc34d3dfb736dc0e
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f8a93508f2307e7a083d5507f3a76351c26b2e69452209f06885dbafa660dc5
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f8a93508f2307e7a083d5507f3a76351c26b2e69452209f06885dbafa660dc5
      lastState: {}
      name: openshift-state-metrics
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:32:01Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.22.89
    podIPs:
    - ip: 10.128.22.89
    qosClass: Burstable
    startTime: "2022-10-11T16:29:52Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.77"
            ],
            "mac": "fa:16:3e:2f:75:3e",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.77"
            ],
            "mac": "fa:16:3e:2f:75:3e",
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-12T16:07:54Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: prometheus-adapter-86cfd468f7-
    labels:
      app.kubernetes.io/component: metrics-adapter
      app.kubernetes.io/name: prometheus-adapter
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 0.9.0
      pod-template-hash: 86cfd468f7
    name: prometheus-adapter-86cfd468f7-blrxn
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: prometheus-adapter-86cfd468f7
      uid: 23d342f4-13a5-46b1-94b2-e71701e2ca51
    resourceVersion: "478940"
    uid: 2f70ccee-4ec5-4082-bc22-22487e4f5ab9
  spec:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: metrics-adapter
              app.kubernetes.io/name: prometheus-adapter
              app.kubernetes.io/part-of: openshift-monitoring
          namespaces:
          - openshift-monitoring
          topologyKey: kubernetes.io/hostname
    containers:
    - args:
      - --prometheus-auth-config=/etc/prometheus-config/prometheus-config.yaml
      - --config=/etc/adapter/config.yaml
      - --logtostderr=true
      - --metrics-relist-interval=1m
      - --prometheus-url=https://prometheus-k8s.openshift-monitoring.svc:9091
      - --secure-port=6443
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --client-ca-file=/etc/tls/private/client-ca-file
      - --requestheader-client-ca-file=/etc/tls/private/requestheader-client-ca-file
      - --requestheader-allowed-names=kube-apiserver-proxy,system:kube-apiserver-proxy,system:openshift-aggregator
      - --requestheader-extra-headers-prefix=X-Remote-Extra-
      - --requestheader-group-headers=X-Remote-Group
      - --requestheader-username-headers=X-Remote-User
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1
      imagePullPolicy: IfNotPresent
      name: prometheus-adapter
      ports:
      - containerPort: 6443
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 40Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /tmp
        name: tmpfs
      - mountPath: /etc/adapter
        name: config
      - mountPath: /etc/prometheus-config
        name: prometheus-adapter-prometheus-config
      - mountPath: /etc/ssl/certs
        name: serving-certs-ca-bundle
      - mountPath: /etc/tls/private
        name: tls
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-cvvtz
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: prometheus-adapter-dockercfg-pqjk2
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: prometheus-adapter
    serviceAccountName: prometheus-adapter
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - emptyDir: {}
      name: tmpfs
    - configMap:
        defaultMode: 420
        name: adapter-config
      name: config
    - configMap:
        defaultMode: 420
        name: prometheus-adapter-prometheus-config
      name: prometheus-adapter-prometheus-config
    - configMap:
        defaultMode: 420
        name: serving-certs-ca-bundle
      name: serving-certs-ca-bundle
    - name: tls
      secret:
        defaultMode: 420
        secretName: prometheus-adapter-5so9dfn4gvaug
    - name: kube-api-access-cvvtz
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-12T16:07:55Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-12T16:07:59Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-12T16:07:59Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-12T16:07:55Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://b5b3c0b7b390149fbdcad12d47890d1ed17958ba4010ceec0e0ec1fb8525387d
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1
      lastState: {}
      name: prometheus-adapter
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-12T16:07:58Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.23.77
    podIPs:
    - ip: 10.128.23.77
    qosClass: Burstable
    startTime: "2022-10-12T16:07:55Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.82"
            ],
            "mac": "fa:16:3e:aa:12:f1",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.82"
            ],
            "mac": "fa:16:3e:aa:12:f1",
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-12T16:07:53Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: prometheus-adapter-86cfd468f7-
    labels:
      app.kubernetes.io/component: metrics-adapter
      app.kubernetes.io/name: prometheus-adapter
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 0.9.0
      pod-template-hash: 86cfd468f7
    name: prometheus-adapter-86cfd468f7-qbb4b
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: prometheus-adapter-86cfd468f7
      uid: 23d342f4-13a5-46b1-94b2-e71701e2ca51
    resourceVersion: "478902"
    uid: 5d160ed9-a15a-44c3-b06d-a183f82d6629
  spec:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: metrics-adapter
              app.kubernetes.io/name: prometheus-adapter
              app.kubernetes.io/part-of: openshift-monitoring
          namespaces:
          - openshift-monitoring
          topologyKey: kubernetes.io/hostname
    containers:
    - args:
      - --prometheus-auth-config=/etc/prometheus-config/prometheus-config.yaml
      - --config=/etc/adapter/config.yaml
      - --logtostderr=true
      - --metrics-relist-interval=1m
      - --prometheus-url=https://prometheus-k8s.openshift-monitoring.svc:9091
      - --secure-port=6443
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --client-ca-file=/etc/tls/private/client-ca-file
      - --requestheader-client-ca-file=/etc/tls/private/requestheader-client-ca-file
      - --requestheader-allowed-names=kube-apiserver-proxy,system:kube-apiserver-proxy,system:openshift-aggregator
      - --requestheader-extra-headers-prefix=X-Remote-Extra-
      - --requestheader-group-headers=X-Remote-Group
      - --requestheader-username-headers=X-Remote-User
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1
      imagePullPolicy: IfNotPresent
      name: prometheus-adapter
      ports:
      - containerPort: 6443
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 40Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /tmp
        name: tmpfs
      - mountPath: /etc/adapter
        name: config
      - mountPath: /etc/prometheus-config
        name: prometheus-adapter-prometheus-config
      - mountPath: /etc/ssl/certs
        name: serving-certs-ca-bundle
      - mountPath: /etc/tls/private
        name: tls
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-sjd7t
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: prometheus-adapter-dockercfg-pqjk2
    nodeName: ostest-n5rnf-worker-0-8kq82
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: prometheus-adapter
    serviceAccountName: prometheus-adapter
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - emptyDir: {}
      name: tmpfs
    - configMap:
        defaultMode: 420
        name: adapter-config
      name: config
    - configMap:
        defaultMode: 420
        name: prometheus-adapter-prometheus-config
      name: prometheus-adapter-prometheus-config
    - configMap:
        defaultMode: 420
        name: serving-certs-ca-bundle
      name: serving-certs-ca-bundle
    - name: tls
      secret:
        defaultMode: 420
        secretName: prometheus-adapter-5so9dfn4gvaug
    - name: kube-api-access-sjd7t
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-12T16:07:54Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-12T16:07:57Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-12T16:07:57Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-12T16:07:53Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://75874a5802148d8935e94787143d4a44b49b9e80a30ca396bcabf4c151a3c913
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1
      lastState: {}
      name: prometheus-adapter
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-12T16:07:57Z"
    hostIP: 10.196.2.72
    phase: Running
    podIP: 10.128.23.82
    podIPs:
    - ip: 10.128.23.82
    qosClass: Burstable
    startTime: "2022-10-12T16:07:54Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.18"
            ],
            "mac": "fa:16:3e:ff:39:16",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.18"
            ],
            "mac": "fa:16:3e:ff:39:16",
            "default": true,
            "dns": {}
        }]
      kubectl.kubernetes.io/default-container: prometheus
      openshift.io/scc: nonroot
    creationTimestamp: "2022-10-11T16:46:10Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: prometheus-k8s-
    labels:
      app: prometheus
      app.kubernetes.io/component: prometheus
      app.kubernetes.io/instance: k8s
      app.kubernetes.io/managed-by: prometheus-operator
      app.kubernetes.io/name: prometheus
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 2.29.2
      controller-revision-hash: prometheus-k8s-77f9b66476
      operator.prometheus.io/name: k8s
      operator.prometheus.io/shard: "0"
      prometheus: k8s
      statefulset.kubernetes.io/pod-name: prometheus-k8s-0
    name: prometheus-k8s-0
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: StatefulSet
      name: prometheus-k8s
      uid: 0cf40d35-afcd-411c-af5e-48a33a70f1b0
    resourceVersion: "68355"
    uid: 57e33cf7-4412-4bfe-b728-d95159125d5b
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app.kubernetes.io/component: prometheus
                app.kubernetes.io/name: prometheus
                app.kubernetes.io/part-of: openshift-monitoring
                prometheus: k8s
            namespaces:
            - openshift-monitoring
            topologyKey: kubernetes.io/hostname
          weight: 100
    containers:
    - args:
      - --web.console.templates=/etc/prometheus/consoles
      - --web.console.libraries=/etc/prometheus/console_libraries
      - --config.file=/etc/prometheus/config_out/prometheus.env.yaml
      - --storage.tsdb.path=/prometheus
      - --storage.tsdb.retention.time=15d
      - --web.enable-lifecycle
      - --web.external-url=https://prometheus-k8s-openshift-monitoring.apps.ostest.shiftstack.com/
      - --web.route-prefix=/
      - --web.listen-address=127.0.0.1:9090
      - --web.config.file=/etc/prometheus/web_config/web-config.yaml
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
      imagePullPolicy: IfNotPresent
      name: prometheus
      readinessProbe:
        exec:
          command:
          - sh
          - -c
          - if [ -x "$(command -v curl)" ]; then exec curl http://localhost:9090/-/ready;
            elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready;
            else exit 1; fi
        failureThreshold: 120
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 3
      resources:
        requests:
          cpu: 70m
          memory: 1Gi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: prometheus-trusted-ca-bundle
        readOnly: true
      - mountPath: /etc/prometheus/config_out
        name: config-out
        readOnly: true
      - mountPath: /etc/prometheus/certs
        name: tls-assets
        readOnly: true
      - mountPath: /prometheus
        name: prometheus-k8s-db
        subPath: prometheus-db
      - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0
        name: prometheus-k8s-rulefiles-0
      - mountPath: /etc/prometheus/web_config/web-config.yaml
        name: web-config
        readOnly: true
        subPath: web-config.yaml
      - mountPath: /etc/prometheus/secrets/kube-etcd-client-certs
        name: secret-kube-etcd-client-certs
        readOnly: true
      - mountPath: /etc/prometheus/secrets/prometheus-k8s-tls
        name: secret-prometheus-k8s-tls
        readOnly: true
      - mountPath: /etc/prometheus/secrets/prometheus-k8s-proxy
        name: secret-prometheus-k8s-proxy
        readOnly: true
      - mountPath: /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls
        name: secret-prometheus-k8s-thanos-sidecar-tls
        readOnly: true
      - mountPath: /etc/prometheus/secrets/kube-rbac-proxy
        name: secret-kube-rbac-proxy
        readOnly: true
      - mountPath: /etc/prometheus/secrets/metrics-client-certs
        name: secret-metrics-client-certs
        readOnly: true
      - mountPath: /etc/prometheus/configmaps/serving-certs-ca-bundle
        name: configmap-serving-certs-ca-bundle
        readOnly: true
      - mountPath: /etc/prometheus/configmaps/kubelet-serving-ca-bundle
        name: configmap-kubelet-serving-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-gqzck
        readOnly: true
    - args:
      - --listen-address=localhost:8080
      - --reload-url=http://localhost:9090/-/reload
      - --config-file=/etc/prometheus/config/prometheus.yaml.gz
      - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      - --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
      command:
      - /bin/prometheus-config-reloader
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: SHARD
        value: "0"
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imagePullPolicy: IfNotPresent
      name: config-reloader
      resources:
        requests:
          cpu: 1m
          memory: 10Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/prometheus/config
        name: config
      - mountPath: /etc/prometheus/config_out
        name: config-out
      - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0
        name: prometheus-k8s-rulefiles-0
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-gqzck
        readOnly: true
    - args:
      - sidecar
      - --prometheus.url=http://localhost:9090/
      - --tsdb.path=/prometheus
      - --grpc-address=[$(POD_IP)]:10901
      - --http-address=127.0.0.1:10902
      - --grpc-server-tls-cert=/etc/tls/grpc/server.crt
      - --grpc-server-tls-key=/etc/tls/grpc/server.key
      - --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt
      env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imagePullPolicy: IfNotPresent
      name: thanos-sidecar
      ports:
      - containerPort: 10902
        name: http
        protocol: TCP
      - containerPort: 10901
        name: grpc
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 25Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/grpc
        name: secret-grpc-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-gqzck
        readOnly: true
    - args:
      - -provider=openshift
      - -https-address=:9091
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:9090
      - -openshift-service-account=prometheus-k8s
      - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      - -htpasswd-file=/etc/proxy/htpasswd/auth
      env:
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imagePullPolicy: IfNotPresent
      name: prometheus-proxy
      ports:
      - containerPort: 9091
        name: web
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-prometheus-k8s-tls
      - mountPath: /etc/proxy/secrets
        name: secret-prometheus-k8s-proxy
      - mountPath: /etc/proxy/htpasswd
        name: secret-prometheus-k8s-htpasswd
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: prometheus-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-gqzck
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9092
      - --upstream=http://127.0.0.1:9095
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --v=10
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9092
        name: tenancy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-prometheus-k8s-tls
      - mountPath: /etc/kube-rbac-proxy
        name: secret-kube-rbac-proxy
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-gqzck
        readOnly: true
    - args:
      - --insecure-listen-address=127.0.0.1:9095
      - --upstream=http://127.0.0.1:9090
      - --label=namespace
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imagePullPolicy: IfNotPresent
      name: prom-label-proxy
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-gqzck
        readOnly: true
    - args:
      - --secure-listen-address=[$(POD_IP)]:10902
      - --upstream=http://127.0.0.1:10902
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --allow-paths=/metrics
      - --logtostderr=true
      - --client-ca-file=/etc/tls/client/client-ca.crt
      env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-thanos
      ports:
      - containerPort: 10902
        name: thanos-proxy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 10Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-prometheus-k8s-thanos-sidecar-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-gqzck
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostname: prometheus-k8s-0
    imagePullSecrets:
    - name: prometheus-k8s-dockercfg-f5qm8
    initContainers:
    - args:
      - --watch-interval=0
      - --listen-address=:8080
      - --config-file=/etc/prometheus/config/prometheus.yaml.gz
      - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      - --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
      command:
      - /bin/prometheus-config-reloader
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: SHARD
        value: "0"
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imagePullPolicy: IfNotPresent
      name: init-config-reloader
      resources:
        requests:
          cpu: 100m
          memory: 50Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/prometheus/config
        name: config
      - mountPath: /etc/prometheus/config_out
        name: config-out
      - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0
        name: prometheus-k8s-rulefiles-0
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-gqzck
        readOnly: true
    nodeName: ostest-n5rnf-worker-0-j4pkp
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 65534
      runAsNonRoot: true
      runAsUser: 65534
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: prometheus-k8s
    serviceAccountName: prometheus-k8s
    subdomain: prometheus-operated
    terminationGracePeriodSeconds: 600
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: prometheus-k8s-db
      persistentVolumeClaim:
        claimName: prometheus-k8s-db-prometheus-k8s-0
    - name: config
      secret:
        defaultMode: 420
        secretName: prometheus-k8s
    - name: tls-assets
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-tls-assets
    - emptyDir: {}
      name: config-out
    - configMap:
        defaultMode: 420
        name: prometheus-k8s-rulefiles-0
      name: prometheus-k8s-rulefiles-0
    - name: web-config
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-web-config
    - name: secret-kube-etcd-client-certs
      secret:
        defaultMode: 420
        secretName: kube-etcd-client-certs
    - name: secret-prometheus-k8s-tls
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-tls
    - name: secret-prometheus-k8s-proxy
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-proxy
    - name: secret-prometheus-k8s-thanos-sidecar-tls
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-thanos-sidecar-tls
    - name: secret-kube-rbac-proxy
      secret:
        defaultMode: 420
        secretName: kube-rbac-proxy
    - name: secret-metrics-client-certs
      secret:
        defaultMode: 420
        secretName: metrics-client-certs
    - configMap:
        defaultMode: 420
        name: serving-certs-ca-bundle
      name: configmap-serving-certs-ca-bundle
    - configMap:
        defaultMode: 420
        name: kubelet-serving-ca-bundle
      name: configmap-kubelet-serving-ca-bundle
    - name: secret-prometheus-k8s-htpasswd
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-htpasswd
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: secret-grpc-tls
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-grpc-tls-bg9h55jpjel3o
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: prometheus-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: prometheus-trusted-ca-bundle
    - name: kube-api-access-gqzck
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:46:26Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:46:36Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:46:36Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:46:11Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://5d3320c71184e1addf19100e9b0e22b9aa5c6f32732e386a5da0abf8ace05f37
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      lastState: {}
      name: config-reloader
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:34Z"
    - containerID: cri-o://6c7642e88266e3d3f1c335f7891b27e145643cb20320fde8d209fcdb93853190
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:35Z"
    - containerID: cri-o://cafcf6053fe0a7b3c67ac6efb2b404448140fc54db10fca7d9c1766806ba8b75
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-thanos
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:36Z"
    - containerID: cri-o://6b35ff495a60795a54256be712e5818deaa0be599b3b18b08fd8f1e71bb1ec5d
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      lastState: {}
      name: prom-label-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:36Z"
    - containerID: cri-o://3a414883c35b3e87c2c09f3b2b8867fcd0df66eee9f93187703e5085f8c10893
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
      lastState: {}
      name: prometheus
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:34Z"
    - containerID: cri-o://a6923b8b95f035a65451e210e99b45c952f45b15c804d56f24f7eb1b32e60fba
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: prometheus-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:35Z"
    - containerID: cri-o://f5cb2ce835f8fbed36917a4b3c532c1fcc1637ab0821627a665e3d1f9c366ef1
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      lastState: {}
      name: thanos-sidecar
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:35Z"
    hostIP: 10.196.0.199
    initContainerStatuses:
    - containerID: cri-o://9815cb281e70c2da417d073b1078853225e5b302c85f2121225a9351d61a913a
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      lastState: {}
      name: init-config-reloader
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://9815cb281e70c2da417d073b1078853225e5b302c85f2121225a9351d61a913a
          exitCode: 0
          finishedAt: "2022-10-11T16:46:25Z"
          reason: Completed
          startedAt: "2022-10-11T16:46:25Z"
    phase: Running
    podIP: 10.128.23.18
    podIPs:
    - ip: 10.128.23.18
    qosClass: Burstable
    startTime: "2022-10-11T16:46:11Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.35"
            ],
            "mac": "fa:16:3e:94:4b:ef",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.35"
            ],
            "mac": "fa:16:3e:94:4b:ef",
            "default": true,
            "dns": {}
        }]
      kubectl.kubernetes.io/default-container: prometheus
      openshift.io/scc: nonroot
    creationTimestamp: "2022-10-11T16:46:10Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: prometheus-k8s-
    labels:
      app: prometheus
      app.kubernetes.io/component: prometheus
      app.kubernetes.io/instance: k8s
      app.kubernetes.io/managed-by: prometheus-operator
      app.kubernetes.io/name: prometheus
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 2.29.2
      controller-revision-hash: prometheus-k8s-77f9b66476
      operator.prometheus.io/name: k8s
      operator.prometheus.io/shard: "0"
      prometheus: k8s
      statefulset.kubernetes.io/pod-name: prometheus-k8s-1
    name: prometheus-k8s-1
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: StatefulSet
      name: prometheus-k8s
      uid: 0cf40d35-afcd-411c-af5e-48a33a70f1b0
    resourceVersion: "68476"
    uid: 50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app.kubernetes.io/component: prometheus
                app.kubernetes.io/name: prometheus
                app.kubernetes.io/part-of: openshift-monitoring
                prometheus: k8s
            namespaces:
            - openshift-monitoring
            topologyKey: kubernetes.io/hostname
          weight: 100
    containers:
    - args:
      - --web.console.templates=/etc/prometheus/consoles
      - --web.console.libraries=/etc/prometheus/console_libraries
      - --config.file=/etc/prometheus/config_out/prometheus.env.yaml
      - --storage.tsdb.path=/prometheus
      - --storage.tsdb.retention.time=15d
      - --web.enable-lifecycle
      - --web.external-url=https://prometheus-k8s-openshift-monitoring.apps.ostest.shiftstack.com/
      - --web.route-prefix=/
      - --web.listen-address=127.0.0.1:9090
      - --web.config.file=/etc/prometheus/web_config/web-config.yaml
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
      imagePullPolicy: IfNotPresent
      name: prometheus
      readinessProbe:
        exec:
          command:
          - sh
          - -c
          - if [ -x "$(command -v curl)" ]; then exec curl http://localhost:9090/-/ready;
            elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready;
            else exit 1; fi
        failureThreshold: 120
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 3
      resources:
        requests:
          cpu: 70m
          memory: 1Gi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: prometheus-trusted-ca-bundle
        readOnly: true
      - mountPath: /etc/prometheus/config_out
        name: config-out
        readOnly: true
      - mountPath: /etc/prometheus/certs
        name: tls-assets
        readOnly: true
      - mountPath: /prometheus
        name: prometheus-k8s-db
        subPath: prometheus-db
      - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0
        name: prometheus-k8s-rulefiles-0
      - mountPath: /etc/prometheus/web_config/web-config.yaml
        name: web-config
        readOnly: true
        subPath: web-config.yaml
      - mountPath: /etc/prometheus/secrets/kube-etcd-client-certs
        name: secret-kube-etcd-client-certs
        readOnly: true
      - mountPath: /etc/prometheus/secrets/prometheus-k8s-tls
        name: secret-prometheus-k8s-tls
        readOnly: true
      - mountPath: /etc/prometheus/secrets/prometheus-k8s-proxy
        name: secret-prometheus-k8s-proxy
        readOnly: true
      - mountPath: /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls
        name: secret-prometheus-k8s-thanos-sidecar-tls
        readOnly: true
      - mountPath: /etc/prometheus/secrets/kube-rbac-proxy
        name: secret-kube-rbac-proxy
        readOnly: true
      - mountPath: /etc/prometheus/secrets/metrics-client-certs
        name: secret-metrics-client-certs
        readOnly: true
      - mountPath: /etc/prometheus/configmaps/serving-certs-ca-bundle
        name: configmap-serving-certs-ca-bundle
        readOnly: true
      - mountPath: /etc/prometheus/configmaps/kubelet-serving-ca-bundle
        name: configmap-kubelet-serving-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-qqxsv
        readOnly: true
    - args:
      - --listen-address=localhost:8080
      - --reload-url=http://localhost:9090/-/reload
      - --config-file=/etc/prometheus/config/prometheus.yaml.gz
      - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      - --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
      command:
      - /bin/prometheus-config-reloader
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: SHARD
        value: "0"
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imagePullPolicy: IfNotPresent
      name: config-reloader
      resources:
        requests:
          cpu: 1m
          memory: 10Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/prometheus/config
        name: config
      - mountPath: /etc/prometheus/config_out
        name: config-out
      - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0
        name: prometheus-k8s-rulefiles-0
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-qqxsv
        readOnly: true
    - args:
      - sidecar
      - --prometheus.url=http://localhost:9090/
      - --tsdb.path=/prometheus
      - --grpc-address=[$(POD_IP)]:10901
      - --http-address=127.0.0.1:10902
      - --grpc-server-tls-cert=/etc/tls/grpc/server.crt
      - --grpc-server-tls-key=/etc/tls/grpc/server.key
      - --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt
      env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imagePullPolicy: IfNotPresent
      name: thanos-sidecar
      ports:
      - containerPort: 10902
        name: http
        protocol: TCP
      - containerPort: 10901
        name: grpc
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 25Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/grpc
        name: secret-grpc-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-qqxsv
        readOnly: true
    - args:
      - -provider=openshift
      - -https-address=:9091
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:9090
      - -openshift-service-account=prometheus-k8s
      - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      - -htpasswd-file=/etc/proxy/htpasswd/auth
      env:
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imagePullPolicy: IfNotPresent
      name: prometheus-proxy
      ports:
      - containerPort: 9091
        name: web
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-prometheus-k8s-tls
      - mountPath: /etc/proxy/secrets
        name: secret-prometheus-k8s-proxy
      - mountPath: /etc/proxy/htpasswd
        name: secret-prometheus-k8s-htpasswd
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: prometheus-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-qqxsv
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9092
      - --upstream=http://127.0.0.1:9095
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --v=10
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9092
        name: tenancy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-prometheus-k8s-tls
      - mountPath: /etc/kube-rbac-proxy
        name: secret-kube-rbac-proxy
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-qqxsv
        readOnly: true
    - args:
      - --insecure-listen-address=127.0.0.1:9095
      - --upstream=http://127.0.0.1:9090
      - --label=namespace
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imagePullPolicy: IfNotPresent
      name: prom-label-proxy
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-qqxsv
        readOnly: true
    - args:
      - --secure-listen-address=[$(POD_IP)]:10902
      - --upstream=http://127.0.0.1:10902
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --allow-paths=/metrics
      - --logtostderr=true
      - --client-ca-file=/etc/tls/client/client-ca.crt
      env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-thanos
      ports:
      - containerPort: 10902
        name: thanos-proxy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 10Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-prometheus-k8s-thanos-sidecar-tls
      - mountPath: /etc/tls/client
        name: metrics-client-ca
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-qqxsv
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostname: prometheus-k8s-1
    imagePullSecrets:
    - name: prometheus-k8s-dockercfg-f5qm8
    initContainers:
    - args:
      - --watch-interval=0
      - --listen-address=:8080
      - --config-file=/etc/prometheus/config/prometheus.yaml.gz
      - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      - --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
      command:
      - /bin/prometheus-config-reloader
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: SHARD
        value: "0"
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imagePullPolicy: IfNotPresent
      name: init-config-reloader
      resources:
        requests:
          cpu: 100m
          memory: 50Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/prometheus/config
        name: config
      - mountPath: /etc/prometheus/config_out
        name: config-out
      - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0
        name: prometheus-k8s-rulefiles-0
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-qqxsv
        readOnly: true
    nodeName: ostest-n5rnf-worker-0-8kq82
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 65534
      runAsNonRoot: true
      runAsUser: 65534
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: prometheus-k8s
    serviceAccountName: prometheus-k8s
    subdomain: prometheus-operated
    terminationGracePeriodSeconds: 600
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: prometheus-k8s-db
      persistentVolumeClaim:
        claimName: prometheus-k8s-db-prometheus-k8s-1
    - name: config
      secret:
        defaultMode: 420
        secretName: prometheus-k8s
    - name: tls-assets
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-tls-assets
    - emptyDir: {}
      name: config-out
    - configMap:
        defaultMode: 420
        name: prometheus-k8s-rulefiles-0
      name: prometheus-k8s-rulefiles-0
    - name: web-config
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-web-config
    - name: secret-kube-etcd-client-certs
      secret:
        defaultMode: 420
        secretName: kube-etcd-client-certs
    - name: secret-prometheus-k8s-tls
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-tls
    - name: secret-prometheus-k8s-proxy
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-proxy
    - name: secret-prometheus-k8s-thanos-sidecar-tls
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-thanos-sidecar-tls
    - name: secret-kube-rbac-proxy
      secret:
        defaultMode: 420
        secretName: kube-rbac-proxy
    - name: secret-metrics-client-certs
      secret:
        defaultMode: 420
        secretName: metrics-client-certs
    - configMap:
        defaultMode: 420
        name: serving-certs-ca-bundle
      name: configmap-serving-certs-ca-bundle
    - configMap:
        defaultMode: 420
        name: kubelet-serving-ca-bundle
      name: configmap-kubelet-serving-ca-bundle
    - name: secret-prometheus-k8s-htpasswd
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-htpasswd
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: secret-grpc-tls
      secret:
        defaultMode: 420
        secretName: prometheus-k8s-grpc-tls-bg9h55jpjel3o
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: prometheus-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: prometheus-trusted-ca-bundle
    - name: kube-api-access-qqxsv
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:46:31Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:46:57Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:46:57Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:46:11Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://8f1de870d2f059356e38367f619aa070b2784584fd75705867ea64fbd0e41e46
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      lastState: {}
      name: config-reloader
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:41Z"
    - containerID: cri-o://c375c94f8370593926824bdf14898b7fbabf403375bbedd3f399502fbcf51adc
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:48Z"
    - containerID: cri-o://7780a1ec4a1b9561b06dc659c72b488406246bf2ba470d9e3190e650af070647
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-thanos
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:56Z"
    - containerID: cri-o://1e75a55b09ea279ec7878c3b3fb2dbbcc9771651400c64368240fe20effe7d95
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      lastState: {}
      name: prom-label-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:56Z"
    - containerID: cri-o://ff98d8a8604e6b4fd133088201e63266e8d65eef437dacd10abd3db0f68df31a
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
      lastState: {}
      name: prometheus
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:41Z"
    - containerID: cri-o://7f58ea7cc403c27cdff172c8e8fda71659bd03f3474f139d85f5f707abe55558
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: prometheus-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:48Z"
    - containerID: cri-o://05008e4f94d89864fe153ff8d78f28477f7a39b049faf05bb0f60f6472fc27f2
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      lastState: {}
      name: thanos-sidecar
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:46:48Z"
    hostIP: 10.196.2.72
    initContainerStatuses:
    - containerID: cri-o://2b6bef26018b326930cad08bb9d3b8b0c61609a26327e0b8383a5ffbcca91d4c
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      lastState: {}
      name: init-config-reloader
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://2b6bef26018b326930cad08bb9d3b8b0c61609a26327e0b8383a5ffbcca91d4c
          exitCode: 0
          finishedAt: "2022-10-11T16:46:30Z"
          reason: Completed
          startedAt: "2022-10-11T16:46:30Z"
    phase: Running
    podIP: 10.128.23.35
    podIPs:
    - ip: 10.128.23.35
    qosClass: Burstable
    startTime: "2022-10-11T16:46:12Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.177"
            ],
            "mac": "fa:16:3e:1a:10:dc",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.177"
            ],
            "mac": "fa:16:3e:1a:10:dc",
            "default": true,
            "dns": {}
        }]
      kubectl.kubernetes.io/default-container: prometheus-operator
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-11T16:14:10Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: prometheus-operator-7bcc4bcc6b-
    labels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: prometheus-operator
      app.kubernetes.io/part-of: openshift-monitoring
      app.kubernetes.io/version: 0.49.0
      pod-template-hash: 7bcc4bcc6b
    name: prometheus-operator-7bcc4bcc6b-zlbgw
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: prometheus-operator-7bcc4bcc6b
      uid: 254d5a3d-70e9-4382-86c9-e36660822831
    resourceVersion: "6842"
    uid: 4a35c240-ec54-45e3-b1a8-5efe98a87928
  spec:
    containers:
    - args:
      - --kubelet-service=kube-system/kubelet
      - --prometheus-config-reloader=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      - --prometheus-instance-namespaces=openshift-monitoring
      - --thanos-ruler-instance-namespaces=openshift-monitoring
      - --alertmanager-instance-namespaces=openshift-monitoring
      - --config-reloader-cpu-limit=0
      - --config-reloader-memory-limit=0
      - --web.enable-tls=true
      - --web.tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --web.tls-min-version=VersionTLS12
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62caff9b13ff229d124b2cb633699775684a348b573f6a6f07bd6f4039b7b0f5
      imagePullPolicy: IfNotPresent
      name: prometheus-operator
      ports:
      - containerPort: 8080
        name: http
        protocol: TCP
      resources:
        requests:
          cpu: 5m
          memory: 150Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: prometheus-operator-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-rx5sv
        readOnly: true
    - args:
      - --logtostderr
      - --secure-listen-address=:8443
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --upstream=https://prometheus-operator.openshift-monitoring.svc:8080/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --client-ca-file=/etc/tls/client/client-ca.crt
      - --upstream-ca-file=/etc/configmaps/operator-cert-ca-bundle/service-ca.crt
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 8443
        name: https
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: prometheus-operator-tls
      - mountPath: /etc/configmaps/operator-cert-ca-bundle
        name: operator-certs-ca-bundle
      - mountPath: /etc/tls/client
        name: metrics-client-ca
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-rx5sv
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: ostest-n5rnf-master-2
    nodeSelector:
      kubernetes.io/os: linux
      node-role.kubernetes.io/master: ""
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: prometheus-operator
    serviceAccountName: prometheus-operator
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: prometheus-operator-tls
      secret:
        defaultMode: 420
        secretName: prometheus-operator-tls
    - configMap:
        defaultMode: 420
        name: operator-certs-ca-bundle
      name: operator-certs-ca-bundle
    - configMap:
        defaultMode: 420
        name: metrics-client-ca
      name: metrics-client-ca
    - name: kube-api-access-rx5sv
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:14:10Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:14:57Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:14:57Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:14:10Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://016fcc07cea03929733c6cf2f74aa7648f3e3e72666bc6ae0e8ccef82359f4be
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState:
        terminated:
          containerID: cri-o://b43d2ab6d990fc3d6b51170adf95df512a430046b85bea281292d41eb82963b0
          exitCode: 255
          finishedAt: "2022-10-11T16:14:55Z"
          message: "imachinery/pkg/util/wait/wait.go:133 +0x98\nk8s.io/apimachinery/pkg/util/wait.Until(0xc000390050,
            0x3b9aca00, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
            +0x4d\ncreated by k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:171
            +0x245\n\ngoroutine 35 [select]:\nk8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0004c0000,
            0xc000390070, 0xc00009c120, 0x0, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539
            +0xf1\nk8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc000390070,
            0x0, 0x0, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492
            +0xc5\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800,
            0xc000390070, 0x0, 0xb, 0xc000123f20)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511
            +0xb0\ncreated by k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:174
            +0x2b3\n\ngoroutine 36 [select]:\nk8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0x0,
            0xc0003900b0, 0x1932f58, 0xc000474000)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279
            +0x87\ncreated by k8s.io/apimachinery/pkg/util/wait.contextForChannel\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278
            +0x8c\n\ngoroutine 37 [select]:\nk8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc00009c480,
            0xdf8475800, 0x0, 0xc00009c300)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588
            +0x135\ncreated by k8s.io/apimachinery/pkg/util/wait.poller.func1\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571
            +0x8c\n"
          reason: Error
          startedAt: "2022-10-11T16:14:55Z"
      name: kube-rbac-proxy
      ready: true
      restartCount: 1
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:14:56Z"
    - containerID: cri-o://02fe220c4e55596fecf911246d99d3117df987bfde39598aa58e23feb0aa0fd8
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62caff9b13ff229d124b2cb633699775684a348b573f6a6f07bd6f4039b7b0f5
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62caff9b13ff229d124b2cb633699775684a348b573f6a6f07bd6f4039b7b0f5
      lastState: {}
      name: prometheus-operator
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:14:49Z"
    hostIP: 10.196.3.187
    phase: Running
    podIP: 10.128.22.177
    podIPs:
    - ip: 10.128.22.177
    qosClass: Burstable
    startTime: "2022-10-11T16:14:10Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.239"
            ],
            "mac": "fa:16:3e:1a:7a:87",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.22.239"
            ],
            "mac": "fa:16:3e:1a:7a:87",
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-11T16:15:04Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: telemeter-client-6d8969b4bf-
    labels:
      k8s-app: telemeter-client
      pod-template-hash: 6d8969b4bf
    name: telemeter-client-6d8969b4bf-dffrt
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: telemeter-client-6d8969b4bf
      uid: 3001942a-2802-482d-a134-f89d1cf69fb9
    resourceVersion: "61502"
    uid: 4910b4f1-5eb2-45e5-9d80-09f1aed4537c
  spec:
    containers:
    - command:
      - /usr/bin/telemeter-client
      - --id=$(ID)
      - --from=$(FROM)
      - --from-ca-file=/etc/serving-certs-ca-bundle/service-ca.crt
      - --from-token-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - --to=$(TO)
      - --to-token-file=/etc/telemeter/token
      - --listen=localhost:8080
      - --anonymize-salt-file=/etc/telemeter/salt
      - --anonymize-labels=$(ANONYMIZE_LABELS)
      - --match={__name__=~"cluster:usage:.*"}
      - --match={__name__="count:up0"}
      - --match={__name__="count:up1"}
      - --match={__name__="cluster_version"}
      - --match={__name__="cluster_version_available_updates"}
      - --match={__name__="cluster_operator_up"}
      - --match={__name__="cluster_operator_conditions"}
      - --match={__name__="cluster_version_payload"}
      - --match={__name__="cluster_installer"}
      - --match={__name__="cluster_infrastructure_provider"}
      - --match={__name__="cluster_feature_set"}
      - --match={__name__="instance:etcd_object_counts:sum"}
      - --match={__name__="ALERTS",alertstate="firing"}
      - --match={__name__="code:apiserver_request_total:rate:sum"}
      - --match={__name__="cluster:capacity_cpu_cores:sum"}
      - --match={__name__="cluster:capacity_memory_bytes:sum"}
      - --match={__name__="cluster:cpu_usage_cores:sum"}
      - --match={__name__="cluster:memory_usage_bytes:sum"}
      - --match={__name__="openshift:cpu_usage_cores:sum"}
      - --match={__name__="openshift:memory_usage_bytes:sum"}
      - --match={__name__="workload:cpu_usage_cores:sum"}
      - --match={__name__="workload:memory_usage_bytes:sum"}
      - --match={__name__="cluster:virt_platform_nodes:sum"}
      - --match={__name__="cluster:node_instance_type_count:sum"}
      - --match={__name__="cnv:vmi_status_running:count"}
      - --match={__name__="node_role_os_version_machine:cpu_capacity_cores:sum"}
      - --match={__name__="node_role_os_version_machine:cpu_capacity_sockets:sum"}
      - --match={__name__="subscription_sync_total"}
      - --match={__name__="olm_resolution_duration_seconds"}
      - --match={__name__="csv_succeeded"}
      - --match={__name__="csv_abnormal"}
      - --match={__name__="cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum"}
      - --match={__name__="cluster:kubelet_volume_stats_used_bytes:provisioner:sum"}
      - --match={__name__="ceph_cluster_total_bytes"}
      - --match={__name__="ceph_cluster_total_used_raw_bytes"}
      - --match={__name__="ceph_health_status"}
      - --match={__name__="job:ceph_osd_metadata:count"}
      - --match={__name__="job:kube_pv:count"}
      - --match={__name__="job:ceph_pools_iops:total"}
      - --match={__name__="job:ceph_pools_iops_bytes:total"}
      - --match={__name__="job:ceph_versions_running:count"}
      - --match={__name__="job:noobaa_total_unhealthy_buckets:sum"}
      - --match={__name__="job:noobaa_bucket_count:sum"}
      - --match={__name__="job:noobaa_total_object_count:sum"}
      - --match={__name__="noobaa_accounts_num"}
      - --match={__name__="noobaa_total_usage"}
      - --match={__name__="console_url"}
      - --match={__name__="cluster:network_attachment_definition_instances:max"}
      - --match={__name__="cluster:network_attachment_definition_enabled_instance_up:max"}
      - --match={__name__="insightsclient_request_send_total"}
      - --match={__name__="cam_app_workload_migrations"}
      - --match={__name__="cluster:apiserver_current_inflight_requests:sum:max_over_time:2m"}
      - --match={__name__="cluster:alertmanager_integrations:max"}
      - --match={__name__="cluster:telemetry_selected_series:count"}
      - --match={__name__="openshift:prometheus_tsdb_head_series:sum"}
      - --match={__name__="openshift:prometheus_tsdb_head_samples_appended_total:sum"}
      - --match={__name__="monitoring:container_memory_working_set_bytes:sum"}
      - --match={__name__="namespace_job:scrape_series_added:topk3_sum1h"}
      - --match={__name__="namespace_job:scrape_samples_post_metric_relabeling:topk3"}
      - --match={__name__="monitoring:haproxy_server_http_responses_total:sum"}
      - --match={__name__="rhmi_status"}
      - --match={__name__="cluster_legacy_scheduler_policy"}
      - --match={__name__="cluster_master_schedulable"}
      - --match={__name__="che_workspace_status"}
      - --match={__name__="che_workspace_started_total"}
      - --match={__name__="che_workspace_failure_total"}
      - --match={__name__="che_workspace_start_time_seconds_sum"}
      - --match={__name__="che_workspace_start_time_seconds_count"}
      - --match={__name__="cco_credentials_mode"}
      - --match={__name__="cluster:kube_persistentvolume_plugin_type_counts:sum"}
      - --match={__name__="visual_web_terminal_sessions_total"}
      - --match={__name__="acm_managed_cluster_info"}
      - --match={__name__="cluster:vsphere_vcenter_info:sum"}
      - --match={__name__="cluster:vsphere_esxi_version_total:sum"}
      - --match={__name__="cluster:vsphere_node_hw_version_total:sum"}
      - --match={__name__="openshift:build_by_strategy:sum"}
      - --match={__name__="rhods_aggregate_availability"}
      - --match={__name__="rhods_total_users"}
      - --match={__name__="instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile",quantile="0.99"}
      - --match={__name__="instance:etcd_mvcc_db_total_size_in_bytes:sum"}
      - --match={__name__="instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile",quantile="0.99"}
      - --match={__name__="instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum"}
      - --match={__name__="instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile",quantile="0.99"}
      - --match={__name__="jaeger_operator_instances_storage_types"}
      - --match={__name__="jaeger_operator_instances_strategies"}
      - --match={__name__="jaeger_operator_instances_agent_strategies"}
      - --match={__name__="appsvcs:cores_by_product:sum"}
      - --match={__name__="nto_custom_profiles:count"}
      - --limit-bytes=5242880
      env:
      - name: ANONYMIZE_LABELS
      - name: FROM
        value: https://prometheus-k8s.openshift-monitoring.svc:9091
      - name: ID
        value: e65548fc-bd07-47dc-b550-8a4fa01dead9
      - name: TO
        value: https://infogw.api.openshift.com/
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a3f86f1b302389d805f18271a6d00cb2e8b6e9c4a859f9f20aa6d0c4f574371
      imagePullPolicy: IfNotPresent
      name: telemeter-client
      ports:
      - containerPort: 8080
        name: http
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 40Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/serving-certs-ca-bundle
        name: serving-certs-ca-bundle
      - mountPath: /etc/telemeter
        name: secret-telemeter-client
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: telemeter-trusted-ca-bundle
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ds46w
        readOnly: true
    - args:
      - --reload-url=http://localhost:8080/-/reload
      - --watched-dir=/etc/serving-certs-ca-bundle
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imagePullPolicy: IfNotPresent
      name: reload
      resources:
        requests:
          cpu: 1m
          memory: 10Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/serving-certs-ca-bundle
        name: serving-certs-ca-bundle
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ds46w
        readOnly: true
    - args:
      - --secure-listen-address=:8443
      - --upstream=http://127.0.0.1:8080/
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 8443
        name: https
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/tls/private
        name: telemeter-client-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ds46w
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: telemeter-client
    serviceAccountName: telemeter-client
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - configMap:
        defaultMode: 420
        name: telemeter-client-serving-certs-ca-bundle
      name: serving-certs-ca-bundle
    - name: secret-telemeter-client
      secret:
        defaultMode: 420
        secretName: telemeter-client
    - name: telemeter-client-tls
      secret:
        defaultMode: 420
        secretName: telemeter-client-tls
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: telemeter-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: telemeter-trusted-ca-bundle
    - name: kube-api-access-ds46w
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:52Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:49Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:49Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:29:52Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://e49a5cc7978570f2d6c8c603c5dbb15ec57c271cd360efb0636b1e06d70757b2
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:48Z"
    - containerID: cri-o://499f1362b275ac07fcb7ae4e1ee1445b83c5e3d5b5fc85ab29a58c66a1bdba7c
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
      lastState: {}
      name: reload
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:48Z"
    - containerID: cri-o://111972f6103805475ef9e6d819a3e32bb4ec63154f6b25c5049a1e7a1667db81
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a3f86f1b302389d805f18271a6d00cb2e8b6e9c4a859f9f20aa6d0c4f574371
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a3f86f1b302389d805f18271a6d00cb2e8b6e9c4a859f9f20aa6d0c4f574371
      lastState: {}
      name: telemeter-client
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:36Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.22.239
    podIPs:
    - ip: 10.128.22.239
    qosClass: Burstable
    startTime: "2022-10-11T16:29:52Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.183"
            ],
            "mac": "fa:16:3e:c3:a9:de",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.183"
            ],
            "mac": "fa:16:3e:c3:a9:de",
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-11T16:30:12Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: thanos-querier-6699db6d95-
    labels:
      app.kubernetes.io/component: query-layer
      app.kubernetes.io/instance: thanos-querier
      app.kubernetes.io/name: thanos-query
      app.kubernetes.io/version: 0.22.0
      pod-template-hash: 6699db6d95
    name: thanos-querier-6699db6d95-42mpw
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: thanos-querier-6699db6d95
      uid: 3dc07169-b785-4638-bae8-477acf441d9f
    resourceVersion: "61844"
    uid: 6987d5e8-4a23-49ad-ab57-6240ef3c4bd7
  spec:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: query-layer
              app.kubernetes.io/instance: thanos-querier
              app.kubernetes.io/name: thanos-query
          topologyKey: kubernetes.io/hostname
    containers:
    - args:
      - query
      - --grpc-address=127.0.0.1:10901
      - --http-address=127.0.0.1:9090
      - --log.format=logfmt
      - --query.replica-label=prometheus_replica
      - --query.replica-label=thanos_ruler_replica
      - --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      - --query.auto-downsampling
      - --store.sd-dns-resolver=miekgdns
      - --grpc-client-tls-secure
      - --grpc-client-tls-cert=/etc/tls/grpc/client.crt
      - --grpc-client-tls-key=/etc/tls/grpc/client.key
      - --grpc-client-tls-ca=/etc/tls/grpc/ca.crt
      - --grpc-client-server-name=prometheus-grpc
      - --rule=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      - --target=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      env:
      - name: HOST_IP_ADDRESS
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.hostIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imagePullPolicy: IfNotPresent
      name: thanos-query
      ports:
      - containerPort: 9090
        name: http
        protocol: TCP
      resources:
        requests:
          cpu: 10m
          memory: 12Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/grpc
        name: secret-grpc-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ml55t
        readOnly: true
    - args:
      - -provider=openshift
      - -https-address=:9091
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:9090
      - -openshift-service-account=thanos-querier
      - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      - -bypass-auth-for=^/-/(healthy|ready)$
      - -htpasswd-file=/etc/proxy/htpasswd/auth
      env:
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 4
        httpGet:
          path: /-/healthy
          port: 9091
          scheme: HTTPS
        initialDelaySeconds: 5
        periodSeconds: 30
        successThreshold: 1
        timeoutSeconds: 1
      name: oauth-proxy
      ports:
      - containerPort: 9091
        name: web
        protocol: TCP
      readinessProbe:
        failureThreshold: 20
        httpGet:
          path: /-/ready
          port: 9091
          scheme: HTTPS
        initialDelaySeconds: 5
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/proxy/secrets
        name: secret-thanos-querier-oauth-cookie
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: thanos-querier-trusted-ca-bundle
        readOnly: true
      - mountPath: /etc/proxy/htpasswd
        name: secret-thanos-querier-oauth-htpasswd
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ml55t
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9092
      - --upstream=http://127.0.0.1:9095
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --allow-paths=/api/v1/query,/api/v1/query_range,/api/v1/labels,/api/v1/label/*/values,/api/v1/series
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9092
        name: tenancy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/kube-rbac-proxy
        name: secret-thanos-querier-kube-rbac-proxy
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ml55t
        readOnly: true
    - args:
      - --insecure-listen-address=127.0.0.1:9095
      - --upstream=http://127.0.0.1:9090
      - --label=namespace
      - --enable-label-apis
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imagePullPolicy: IfNotPresent
      name: prom-label-proxy
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ml55t
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9093
      - --upstream=http://127.0.0.1:9095
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --allow-paths=/api/v1/rules
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-rules
      ports:
      - containerPort: 9093
        name: tenancy-rules
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/kube-rbac-proxy
        name: secret-thanos-querier-kube-rbac-proxy-rules
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ml55t
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: thanos-querier-dockercfg-pphnw
    nodeName: ostest-n5rnf-worker-0-j4pkp
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: thanos-querier
    serviceAccountName: thanos-querier
    terminationGracePeriodSeconds: 120
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: secret-thanos-querier-tls
      secret:
        defaultMode: 420
        secretName: thanos-querier-tls
    - name: secret-thanos-querier-oauth-cookie
      secret:
        defaultMode: 420
        secretName: thanos-querier-oauth-cookie
    - name: secret-thanos-querier-kube-rbac-proxy
      secret:
        defaultMode: 420
        secretName: thanos-querier-kube-rbac-proxy
    - name: secret-thanos-querier-kube-rbac-proxy-rules
      secret:
        defaultMode: 420
        secretName: thanos-querier-kube-rbac-proxy-rules
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: thanos-querier-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: thanos-querier-trusted-ca-bundle
    - name: secret-thanos-querier-oauth-htpasswd
      secret:
        defaultMode: 420
        secretName: thanos-querier-oauth-htpasswd
    - name: secret-grpc-tls
      secret:
        defaultMode: 420
        secretName: thanos-querier-grpc-tls-ejqjssqja76hi
    - name: kube-api-access-ml55t
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:32Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:09Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:09Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:32Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://3de35991ef5607ba09fd496e85cb6d709d8ee3a8d51efe3ef8b013d5d0cfd1ba
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:00Z"
    - containerID: cri-o://8b3ab57752f962e1d3b299ee3c96f502b63018a733766b19ab9d926ae741e562
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-rules
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:07Z"
    - containerID: cri-o://73f6483090ebae1503fd394766af8a4d84cdcd65fd046367846e3bc1b3c3ff81
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: oauth-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:00Z"
    - containerID: cri-o://6f61c6c082a310415eac3f33fa30b330e4940e82ae1cc7e149ab73c564f4a562
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      lastState: {}
      name: prom-label-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:07Z"
    - containerID: cri-o://afc3af17ece11b17afc10a01856931a8672c7433642b2b192199a103256b621d
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      lastState: {}
      name: thanos-query
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:59Z"
    hostIP: 10.196.0.199
    phase: Running
    podIP: 10.128.23.183
    podIPs:
    - ip: 10.128.23.183
    qosClass: Burstable
    startTime: "2022-10-11T16:30:32Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.114"
            ],
            "mac": "fa:16:3e:64:00:9b",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.114"
            ],
            "mac": "fa:16:3e:64:00:9b",
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-11T16:30:12Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: thanos-querier-6699db6d95-
    labels:
      app.kubernetes.io/component: query-layer
      app.kubernetes.io/instance: thanos-querier
      app.kubernetes.io/name: thanos-query
      app.kubernetes.io/version: 0.22.0
      pod-template-hash: 6699db6d95
    name: thanos-querier-6699db6d95-cvbzq
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: thanos-querier-6699db6d95
      uid: 3dc07169-b785-4638-bae8-477acf441d9f
    resourceVersion: "62472"
    uid: 95c88db1-e599-4351-8604-3655d9250791
  spec:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: query-layer
              app.kubernetes.io/instance: thanos-querier
              app.kubernetes.io/name: thanos-query
          topologyKey: kubernetes.io/hostname
    containers:
    - args:
      - query
      - --grpc-address=127.0.0.1:10901
      - --http-address=127.0.0.1:9090
      - --log.format=logfmt
      - --query.replica-label=prometheus_replica
      - --query.replica-label=thanos_ruler_replica
      - --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      - --query.auto-downsampling
      - --store.sd-dns-resolver=miekgdns
      - --grpc-client-tls-secure
      - --grpc-client-tls-cert=/etc/tls/grpc/client.crt
      - --grpc-client-tls-key=/etc/tls/grpc/client.key
      - --grpc-client-tls-ca=/etc/tls/grpc/ca.crt
      - --grpc-client-server-name=prometheus-grpc
      - --rule=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      - --target=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      env:
      - name: HOST_IP_ADDRESS
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.hostIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imagePullPolicy: IfNotPresent
      name: thanos-query
      ports:
      - containerPort: 9090
        name: http
        protocol: TCP
      resources:
        requests:
          cpu: 10m
          memory: 12Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/grpc
        name: secret-grpc-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    - args:
      - -provider=openshift
      - -https-address=:9091
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:9090
      - -openshift-service-account=thanos-querier
      - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      - -bypass-auth-for=^/-/(healthy|ready)$
      - -htpasswd-file=/etc/proxy/htpasswd/auth
      env:
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 4
        httpGet:
          path: /-/healthy
          port: 9091
          scheme: HTTPS
        initialDelaySeconds: 5
        periodSeconds: 30
        successThreshold: 1
        timeoutSeconds: 1
      name: oauth-proxy
      ports:
      - containerPort: 9091
        name: web
        protocol: TCP
      readinessProbe:
        failureThreshold: 20
        httpGet:
          path: /-/ready
          port: 9091
          scheme: HTTPS
        initialDelaySeconds: 5
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/proxy/secrets
        name: secret-thanos-querier-oauth-cookie
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: thanos-querier-trusted-ca-bundle
        readOnly: true
      - mountPath: /etc/proxy/htpasswd
        name: secret-thanos-querier-oauth-htpasswd
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9092
      - --upstream=http://127.0.0.1:9095
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --allow-paths=/api/v1/query,/api/v1/query_range,/api/v1/labels,/api/v1/label/*/values,/api/v1/series
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9092
        name: tenancy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/kube-rbac-proxy
        name: secret-thanos-querier-kube-rbac-proxy
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    - args:
      - --insecure-listen-address=127.0.0.1:9095
      - --upstream=http://127.0.0.1:9090
      - --label=namespace
      - --enable-label-apis
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imagePullPolicy: IfNotPresent
      name: prom-label-proxy
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9093
      - --upstream=http://127.0.0.1:9095
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --allow-paths=/api/v1/rules
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-rules
      ports:
      - containerPort: 9093
        name: tenancy-rules
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/kube-rbac-proxy
        name: secret-thanos-querier-kube-rbac-proxy-rules
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: thanos-querier-dockercfg-pphnw
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: thanos-querier
    serviceAccountName: thanos-querier
    terminationGracePeriodSeconds: 120
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: secret-thanos-querier-tls
      secret:
        defaultMode: 420
        secretName: thanos-querier-tls
    - name: secret-thanos-querier-oauth-cookie
      secret:
        defaultMode: 420
        secretName: thanos-querier-oauth-cookie
    - name: secret-thanos-querier-kube-rbac-proxy
      secret:
        defaultMode: 420
        secretName: thanos-querier-kube-rbac-proxy
    - name: secret-thanos-querier-kube-rbac-proxy-rules
      secret:
        defaultMode: 420
        secretName: thanos-querier-kube-rbac-proxy-rules
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: thanos-querier-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: thanos-querier-trusted-ca-bundle
    - name: secret-thanos-querier-oauth-htpasswd
      secret:
        defaultMode: 420
        secretName: thanos-querier-oauth-htpasswd
    - name: secret-grpc-tls
      secret:
        defaultMode: 420
        secretName: thanos-querier-grpc-tls-ejqjssqja76hi
    - name: kube-api-access-ddjdg
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:12Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:42Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:42Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:12Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://3925db2db4625ef59e27c39d662e21a6d627ffd9cc4d5cb107c5cfeb349d5125
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:34Z"
    - containerID: cri-o://29b84274309e904f9231b9f6071bd5646a0c3f7014fac86a0301d192a88f2d36
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-rules
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:34Z"
    - containerID: cri-o://f56b1a3f2be3fa5f1619c84fc4fd6f2e761621164b9451155438257a292baa6d
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: oauth-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:33Z"
    - containerID: cri-o://b8e23910be357b9098e9870d53c3713a33e7dc7e57b282be451ef21488353f4b
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      lastState: {}
      name: prom-label-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:34Z"
    - containerID: cri-o://5455dbf6532b3af64140857906aacfa67bec8f76d5290eb73f737b4180a38a1a
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      lastState: {}
      name: thanos-query
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:33Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.23.114
    podIPs:
    - ip: 10.128.23.114
    qosClass: Burstable
    startTime: "2022-10-11T16:30:12Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
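For reference, the two thanos-querier replicas listed above expose their query API through a sidecar chain that is visible in the container args: kube-rbac-proxy serves the "tenancy" port on 0.0.0.0:9092 and proxies to prom-label-proxy on 127.0.0.1:9095, which enforces the namespace label before forwarding to thanos-query on 127.0.0.1:9090, with only the /api/v1/query* read paths allowed. A minimal sketch of a tenancy-scoped query against that port follows; it is hypothetical and assumes the usual thanos-querier Service name in openshift-monitoring and a bearer token accepted by the kube-rbac-proxy config, neither of which is shown in this output.

  # Hypothetical sketch, not part of the captured test output.
  # TOKEN: assumed to be a token authorized by the kube-rbac-proxy config for the target namespace.
  curl -sk -H "Authorization: Bearer ${TOKEN}" \
    "https://thanos-querier.openshift-monitoring.svc:9092/api/v1/query?namespace=openshift-monitoring&query=up"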
Oct 13 10:20:16.688: INFO: Running 'oc --kubeconfig=.kube/config describe pod/prometheus-k8s-0 -n openshift-monitoring'
Oct 13 10:20:16.867: INFO: Describing pod "prometheus-k8s-0"
Name:                 prometheus-k8s-0
Namespace:            openshift-monitoring
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 ostest-n5rnf-worker-0-j4pkp/10.196.0.199
Start Time:           Tue, 11 Oct 2022 16:46:11 +0000
Labels:               app=prometheus
                      app.kubernetes.io/component=prometheus
                      app.kubernetes.io/instance=k8s
                      app.kubernetes.io/managed-by=prometheus-operator
                      app.kubernetes.io/name=prometheus
                      app.kubernetes.io/part-of=openshift-monitoring
                      app.kubernetes.io/version=2.29.2
                      controller-revision-hash=prometheus-k8s-77f9b66476
                      operator.prometheus.io/name=k8s
                      operator.prometheus.io/shard=0
                      prometheus=k8s
                      statefulset.kubernetes.io/pod-name=prometheus-k8s-0
Annotations:          k8s.v1.cni.cncf.io/network-status:
                        [{
                            "name": "kuryr",
                            "interface": "eth0",
                            "ips": [
                                "10.128.23.18"
                            ],
                            "mac": "fa:16:3e:ff:39:16",
                            "default": true,
                            "dns": {}
                        }]
                      k8s.v1.cni.cncf.io/networks-status:
                        [{
                            "name": "kuryr",
                            "interface": "eth0",
                            "ips": [
                                "10.128.23.18"
                            ],
                            "mac": "fa:16:3e:ff:39:16",
                            "default": true,
                            "dns": {}
                        }]
                      kubectl.kubernetes.io/default-container: prometheus
                      openshift.io/scc: nonroot
Status:               Running
IP:                   10.128.23.18
IPs:
  IP:           10.128.23.18
Controlled By:  StatefulSet/prometheus-k8s
Init Containers:
  init-config-reloader:
    Container ID:  cri-o://9815cb281e70c2da417d073b1078853225e5b302c85f2121225a9351d61a913a
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/prometheus-config-reloader
    Args:
      --watch-interval=0
      --listen-address=:8080
      --config-file=/etc/prometheus/config/prometheus.yaml.gz
      --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 11 Oct 2022 16:46:25 +0000
      Finished:     Tue, 11 Oct 2022 16:46:25 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:  prometheus-k8s-0 (v1:metadata.name)
      SHARD:     0
    Mounts:
      /etc/prometheus/config from config (rw)
      /etc/prometheus/config_out from config-out (rw)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
Containers:
  prometheus:
    Container ID:  cri-o://3a414883c35b3e87c2c09f3b2b8867fcd0df66eee9f93187703e5085f8c10893
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
    Port:          <none>
    Host Port:     <none>
    Args:
      --web.console.templates=/etc/prometheus/consoles
      --web.console.libraries=/etc/prometheus/console_libraries
      --config.file=/etc/prometheus/config_out/prometheus.env.yaml
      --storage.tsdb.path=/prometheus
      --storage.tsdb.retention.time=15d
      --web.enable-lifecycle
      --web.external-url=https://prometheus-k8s-openshift-monitoring.apps.ostest.shiftstack.com/
      --web.route-prefix=/
      --web.listen-address=127.0.0.1:9090
      --web.config.file=/etc/prometheus/web_config/web-config.yaml
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:34 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        70m
      memory:     1Gi
    Readiness:    exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi] delay=0s timeout=3s period=5s #success=1 #failure=120
    Environment:  <none>
    Mounts:
      /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro)
      /etc/prometheus/certs from tls-assets (ro)
      /etc/prometheus/config_out from config-out (ro)
      /etc/prometheus/configmaps/kubelet-serving-ca-bundle from configmap-kubelet-serving-ca-bundle (ro)
      /etc/prometheus/configmaps/serving-certs-ca-bundle from configmap-serving-certs-ca-bundle (ro)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /etc/prometheus/secrets/kube-etcd-client-certs from secret-kube-etcd-client-certs (ro)
      /etc/prometheus/secrets/kube-rbac-proxy from secret-kube-rbac-proxy (ro)
      /etc/prometheus/secrets/metrics-client-certs from secret-metrics-client-certs (ro)
      /etc/prometheus/secrets/prometheus-k8s-proxy from secret-prometheus-k8s-proxy (ro)
      /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls from secret-prometheus-k8s-thanos-sidecar-tls (ro)
      /etc/prometheus/secrets/prometheus-k8s-tls from secret-prometheus-k8s-tls (ro)
      /etc/prometheus/web_config/web-config.yaml from web-config (ro,path="web-config.yaml")
      /prometheus from prometheus-k8s-db (rw,path="prometheus-db")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  config-reloader:
    Container ID:  cri-o://5d3320c71184e1addf19100e9b0e22b9aa5c6f32732e386a5da0abf8ace05f37
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/prometheus-config-reloader
    Args:
      --listen-address=localhost:8080
      --reload-url=http://localhost:9090/-/reload
      --config-file=/etc/prometheus/config/prometheus.yaml.gz
      --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:34 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  10Mi
    Environment:
      POD_NAME:  prometheus-k8s-0 (v1:metadata.name)
      SHARD:     0
    Mounts:
      /etc/prometheus/config from config (rw)
      /etc/prometheus/config_out from config-out (rw)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  thanos-sidecar:
    Container ID:  cri-o://f5cb2ce835f8fbed36917a4b3c532c1fcc1637ab0821627a665e3d1f9c366ef1
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
    Ports:         10902/TCP, 10901/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      sidecar
      --prometheus.url=http://localhost:9090/
      --tsdb.path=/prometheus
      --grpc-address=[$(POD_IP)]:10901
      --http-address=127.0.0.1:10902
      --grpc-server-tls-cert=/etc/tls/grpc/server.crt
      --grpc-server-tls-key=/etc/tls/grpc/server.key
      --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:35 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  25Mi
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /etc/tls/grpc from secret-grpc-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  prometheus-proxy:
    Container ID:  cri-o://a6923b8b95f035a65451e210e99b45c952f45b15c804d56f24f7eb1b32e60fba
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
    Port:          9091/TCP
    Host Port:     0/TCP
    Args:
      -provider=openshift
      -https-address=:9091
      -http-address=
      -email-domain=*
      -upstream=http://localhost:9090
      -openshift-service-account=prometheus-k8s
      -openshift-sar={"resource": "namespaces", "verb": "get"}
      -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}
      -tls-cert=/etc/tls/private/tls.crt
      -tls-key=/etc/tls/private/tls.key
      -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      -cookie-secret-file=/etc/proxy/secrets/session_secret
      -openshift-ca=/etc/pki/tls/cert.pem
      -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      -htpasswd-file=/etc/proxy/htpasswd/auth
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:35 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  20Mi
    Environment:
      HTTP_PROXY:   
      HTTPS_PROXY:  
      NO_PROXY:     
    Mounts:
      /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro)
      /etc/proxy/htpasswd from secret-prometheus-k8s-htpasswd (rw)
      /etc/proxy/secrets from secret-prometheus-k8s-proxy (rw)
      /etc/tls/private from secret-prometheus-k8s-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  kube-rbac-proxy:
    Container ID:  cri-o://6c7642e88266e3d3f1c335f7891b27e145643cb20320fde8d209fcdb93853190
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Port:          9092/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:9092
      --upstream=http://127.0.0.1:9095
      --config-file=/etc/kube-rbac-proxy/config.yaml
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
      --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:35 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        1m
      memory:     15Mi
    Environment:  <none>
    Mounts:
      /etc/kube-rbac-proxy from secret-kube-rbac-proxy (rw)
      /etc/tls/private from secret-prometheus-k8s-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  prom-label-proxy:
    Container ID:  cri-o://6b35ff495a60795a54256be712e5818deaa0be599b3b18b08fd8f1e71bb1ec5d
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
    Port:          <none>
    Host Port:     <none>
    Args:
      --insecure-listen-address=127.0.0.1:9095
      --upstream=http://127.0.0.1:9090
      --label=namespace
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:36 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        1m
      memory:     15Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  kube-rbac-proxy-thanos:
    Container ID:  cri-o://cafcf6053fe0a7b3c67ac6efb2b404448140fc54db10fca7d9c1766806ba8b75
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Port:          10902/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=[$(POD_IP)]:10902
      --upstream=http://127.0.0.1:10902
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
      --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      --allow-paths=/metrics
      --logtostderr=true
      --client-ca-file=/etc/tls/client/client-ca.crt
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:36 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  10Mi
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /etc/tls/client from metrics-client-ca (ro)
      /etc/tls/private from secret-prometheus-k8s-thanos-sidecar-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  prometheus-k8s-db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-k8s-db-prometheus-k8s-0
    ReadOnly:   false
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s
    Optional:    false
  tls-assets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-tls-assets
    Optional:    false
  config-out:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  prometheus-k8s-rulefiles-0:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-k8s-rulefiles-0
    Optional:  false
  web-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-web-config
    Optional:    false
  secret-kube-etcd-client-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-etcd-client-certs
    Optional:    false
  secret-prometheus-k8s-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-tls
    Optional:    false
  secret-prometheus-k8s-proxy:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-proxy
    Optional:    false
  secret-prometheus-k8s-thanos-sidecar-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-thanos-sidecar-tls
    Optional:    false
  secret-kube-rbac-proxy:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-rbac-proxy
    Optional:    false
  secret-metrics-client-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  metrics-client-certs
    Optional:    false
  configmap-serving-certs-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      serving-certs-ca-bundle
    Optional:  false
  configmap-kubelet-serving-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kubelet-serving-ca-bundle
    Optional:  false
  secret-prometheus-k8s-htpasswd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-htpasswd
    Optional:    false
  metrics-client-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      metrics-client-ca
    Optional:  false
  secret-grpc-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-grpc-tls-bg9h55jpjel3o
    Optional:    false
  prometheus-trusted-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-trusted-ca-bundle-2rsonso43rc5p
    Optional:  true
  kube-api-access-gqzck:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
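The describe output above shows prometheus-k8s-0 writing its TSDB to the PersistentVolumeClaim prometheus-k8s-db-prometheus-k8s-0, mounted at /prometheus. The container logs that follow report WAL writes under /prometheus/wal failing with "no space left on device", so checking that mount's usage is the natural follow-up; a sketch is shown below (hypothetical command, assuming df is available in the prometheus container image).

  # Hypothetical sketch, not part of the captured test output.
  oc --kubeconfig=.kube/config -n openshift-monitoring exec prometheus-k8s-0 -c prometheus -- df -h /prometheus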


Oct 13 10:20:16.867: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c init-config-reloader -n openshift-monitoring'
Oct 13 10:20:17.069: INFO: Log for pod "prometheus-k8s-0"/"init-config-reloader"
---->
level=info ts=2022-10-11T16:46:25.301002319Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=fc23b05)"
level=info ts=2022-10-11T16:46:25.301078043Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221006-18:49:18)"
<----end of log for "prometheus-k8s-0"/"init-config-reloader"

Oct 13 10:20:17.069: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c prometheus -n openshift-monitoring'
Oct 13 10:20:18.784: INFO: Log for pod "prometheus-k8s-0"/"prometheus"
---->
level=error ts=2022-10-13T08:53:36.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:36.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:36.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:37.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:40.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:40.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:40.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:41.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:45.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:46.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:46.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:47.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.900Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:50.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:50.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:50.493Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:51.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:52.181Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88SKDNVAA0H9PCXSK9HB24.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T08:53:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:52.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:52.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:54.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:56.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:56.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:56.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:57.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:57.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:58.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:58.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:59.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:59.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:00.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:00.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:00.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:01.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:01.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:02.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:02.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:02.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:02.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:02.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:03.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:03.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:03.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:03.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:03.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:04.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:04.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:04.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:04.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:05.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:05.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:06.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:06.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:07.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:08.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:08.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:08.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:08.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:09.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:09.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:10.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:10.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:10.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:11.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:11.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:12.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:12.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:12.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.296Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:16.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:16.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:17.265Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:18.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.693Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.875Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.883Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:19.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:20.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:20.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:21.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:22.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:22.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:24.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:24.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:24.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:26.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:26.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:27.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:28.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:28.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:28.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:28.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:28.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:29.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:29.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:29.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:29.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:29.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:30.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:30.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:30.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:30.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:31.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:32.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:32.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:32.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:32.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:32.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:32.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:33.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:33.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:33.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:34.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:34.302Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:35.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:35.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:35.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:35.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:36.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:36.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:36.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:36.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:37.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:38.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:38.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:38.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:38.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:39.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:39.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:40.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:40.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:41.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:42.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:42.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:42.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:42.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:43.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:43.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:43.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:43.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:43.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:44.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:44.216Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:44.299Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:44.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:44.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:46.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:47.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:47.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:48.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.785Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:50.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:51.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:52.182Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88VE0P6A15D0F7AARVRKCZ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T08:54:52.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:52.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:53.243Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:54.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:56.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:56.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:56.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:57.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:58.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:58.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:58.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:59.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:59.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:00.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:00.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:01.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:01.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:01.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:02.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:02.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:02.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:02.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:02.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:03.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:03.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:03.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:04.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:04.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:04.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:04.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:05.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:05.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:05.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:06.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:06.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:06.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:07.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:08.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:08.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:08.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:09.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:09.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:10.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:10.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:10.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:11.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:12.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:12.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:12.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:13.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:13.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:13.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:13.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:13.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:13.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:13.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.167Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:14.337Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:14.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:14.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:15.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:16.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:17.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:17.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:17.271Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:17.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:17.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:18.265Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:18.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.674Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.830Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.835Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:19.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:20.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:20.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:21.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:21.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:22.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the same warning repeated 29 more times through ts=2022-10-13T08:55:22.602Z]
level=error ts=2022-10-13T08:55:22.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:23.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the same warning repeated 3 more times through ts=2022-10-13T08:55:24.512Z]
level=error ts=2022-10-13T08:55:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:24.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:24.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:26.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:26.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the same warning repeated 5 more times through ts=2022-10-13T08:55:27.619Z]
level=warn ts=2022-10-13T08:55:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the same warning repeated 11 more times through ts=2022-10-13T08:55:27.712Z]
level=error ts=2022-10-13T08:55:27.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:27.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:28.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:28.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:28.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:29.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:29.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:29.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:30.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:30.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:31.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:31.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:31.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the same warning repeated 4 more times through ts=2022-10-13T08:55:31.488Z]
level=error ts=2022-10-13T08:55:31.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:31.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:31.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:31.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:32.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the same warning repeated 10 more times through ts=2022-10-13T08:55:32.548Z]
level=error ts=2022-10-13T08:55:32.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:32.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:32.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:32.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:32.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:32.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:33.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:33.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:33.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:33.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:34.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:34.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the same warning repeated 2 more times through ts=2022-10-13T08:55:34.300Z]
level=error ts=2022-10-13T08:55:34.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:34.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:34.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:35.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:35.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:35.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:36.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:36.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:36.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the same warning repeated 5 more times through ts=2022-10-13T08:55:40.983Z]
level=error ts=2022-10-13T08:55:41.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:41.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:42.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:42.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:42.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:42.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the same warning repeated 11 more times through ts=2022-10-13T08:55:44.076Z]
level=error ts=2022-10-13T08:55:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.161Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.256Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.350Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:45.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:45.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:45.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:46.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:47.271Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-ingress.rules repeated 7 more times between 08:55:49.504Z and 08:55:49.507Z ...]
level=error ts=2022-10-13T08:55:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.770Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.945Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:50.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:50.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:50.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:50.485Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:51.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:51.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:52.183Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88X8KP5XDYAWGB07SMBJ94.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T08:55:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:52.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-kubernetes.rules repeated 37 more times between 08:55:52.571Z and 08:55:52.609Z ...]
level=error ts=2022-10-13T08:55:52.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:52.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:53.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-monitoring.rules repeated 5 more times between 08:55:57.616Z and 08:55:57.619Z ...]
level=warn ts=2022-10-13T08:55:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=k8s.rules repeated 11 more times between 08:55:57.657Z and 08:55:57.697Z ...]
level=error ts=2022-10-13T08:55:57.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:59.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:59.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:59.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-etcd-telemetry.rules repeated 4 more times between 08:56:01.489Z and 08:56:01.490Z ...]
level=error ts=2022-10-13T08:56:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=node-exporter.rules repeated 10 more times between 08:56:02.546Z and 08:56:02.549Z ...]
level=error ts=2022-10-13T08:56:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:11.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:11.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.145Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.221Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.320Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.539Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:16.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:16.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:17.399Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.922Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:20.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:20.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:20.578Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:21.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:21.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:22.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:22.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:27.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:29.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:29.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:30.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:30.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:30.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.137Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.288Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:46.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:47.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:50.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:50.187Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:50.652Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:51.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:52.183Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88Z36QGQ20DZWX8TNM2WBW.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T08:56:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:52.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:52.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:59.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:59.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:00.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:00.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:00.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:11.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:12.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:12.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:12.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.185Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.466Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.466Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.543Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:16.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:17.459Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.895Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:20.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:20.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:20.537Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:21.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:21.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:22.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:22.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:29.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:29.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:29.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:30.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:30.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.985Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:41.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.069Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.146Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.219Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.306Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:45.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:46.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:47.315Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning for group=openshift-ingress.rules repeated 7 more times, last at ts=2022-10-13T08:57:49.506Z ...]
level=error ts=2022-10-13T08:57:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.665Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.829Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.835Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:50.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:51.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:52.184Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF890XSR3NVQ36V7GQJ9G2EH.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T08:57:52.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning for group=openshift-kubernetes.rules repeated 41 more times, last at ts=2022-10-13T08:57:52.599Z ...]
level=error ts=2022-10-13T08:57:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:52.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning for group=k8s.rules repeated 11 more times, last at ts=2022-10-13T08:57:57.713Z ...]
level=error ts=2022-10-13T08:57:57.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:59.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:59.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:59.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:00.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:00.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning for group=node-exporter.rules repeated 10 more times, last at ts=2022-10-13T08:58:02.548Z ...]
level=error ts=2022-10-13T08:58:02.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:11.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:11.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.182Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.377Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:15.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:15.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:16.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:17.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.714Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.882Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.891Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:20.289Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:21.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:21.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:22.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:22.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:23.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:23.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.985Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.151Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.236Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.330Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:47.300Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[7 further identical "Rule sample appending failed" warnings for group=openshift-ingress.rules, ts=2022-10-13T08:58:49.503Z to 08:58:49.506Z, all with the same "no space left on device" WAL error, omitted]
level=error ts=2022-10-13T08:58:49.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.726Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.891Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.897Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:50.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:50.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:52.185Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF892RCSVWPXGF6QFT5JZB4Z.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T08:58:52.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[41 further identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules, ts=2022-10-13T08:58:52.566Z to 08:58:52.599Z, all with the same "no space left on device" WAL error, omitted]
level=error ts=2022-10-13T08:58:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:52.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:57.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[11 further identical "Rule sample appending failed" warnings for group=k8s.rules, ts=2022-10-13T08:58:57.655Z to 08:58:57.700Z, all with the same "no space left on device" WAL error, omitted]
level=error ts=2022-10-13T08:58:57.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:59.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:59.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:00.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:00.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[10 further identical "Rule sample appending failed" warnings for group=node-exporter.rules, ts=2022-10-13T08:59:02.545Z to 08:59:02.550Z, all with the same "no space left on device" WAL error, omitted]
level=error ts=2022-10-13T08:59:02.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:11.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:11.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=kube-apiserver.rules "Rule sample appending failed" warning repeated 12 more times through ts=2022-10-13T08:59:14.174Z]
level=error ts=2022-10-13T08:59:14.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.292Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.388Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:16.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:17.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=openshift-ingress.rules "Rule sample appending failed" warning repeated 7 more times through ts=2022-10-13T08:59:19.507Z]
level=error ts=2022-10-13T08:59:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.584Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.744Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.750Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:20.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:20.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:21.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:22.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=openshift-kubernetes.rules "Rule sample appending failed" warning repeated 41 more times through ts=2022-10-13T08:59:22.610Z]
level=error ts=2022-10-13T08:59:22.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:27.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=k8s.rules "Rule sample appending failed" warning repeated 8 more times through ts=2022-10-13T08:59:27.705Z]
level=error ts=2022-10-13T08:59:27.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:29.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:30.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:42.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:42.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:42.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.161Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.331Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:45.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:45.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:46.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:47.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.813Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:50.375Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:51.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:51.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:52.186Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF894JZTWVBECSW7HWXTEZJ7.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T08:59:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 36 more times for group=openshift-kubernetes.rules between ts=2022-10-13T08:59:52.567Z and ts=2022-10-13T08:59:52.587Z]
level=error ts=2022-10-13T08:59:52.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:52.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:57.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T08:59:57.616Z and ts=2022-10-13T08:59:57.618Z]
level=warn ts=2022-10-13T08:59:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 11 more times for group=k8s.rules between ts=2022-10-13T08:59:57.657Z and ts=2022-10-13T08:59:57.714Z]
level=error ts=2022-10-13T08:59:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:59.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:59.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:00.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:00.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:00.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.492Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.492Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.493Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.493Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.493Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T09:00:02.546Z and ts=2022-10-13T09:00:02.550Z]
level=error ts=2022-10-13T09:00:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 5 more times for group=kube-scheduler.rules between ts=2022-10-13T09:00:10.981Z and ts=2022-10-13T09:00:10.984Z]
level=error ts=2022-10-13T09:00:11.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:12.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:12.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:12.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.171Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.249Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.328Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:15.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:16.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:16.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:17.318Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.910Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.943Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:20.092Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:20.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:20.522Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:21.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:21.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:22.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:22.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:27.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:29.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:30.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:30.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:30.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:41.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.204Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.316Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.402Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:45.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:45.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:45.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:46.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:47.320Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:50.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:50.141Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:50.549Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:51.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:51.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:52.187Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF896DJV5EKEQ94KN4DEB2QP.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:00:52.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:00:52.567Z and ts=2022-10-13T09:00:52.602Z)
level=error ts=2022-10-13T09:00:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:53.245Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T09:00:57.616Z and ts=2022-10-13T09:00:57.619Z)
level=warn ts=2022-10-13T09:00:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 11 more times for group=k8s.rules between ts=2022-10-13T09:00:57.658Z and ts=2022-10-13T09:00:57.705Z)
level=error ts=2022-10-13T09:00:57.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:59.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:59.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:59.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 4 more times for group=openshift-etcd-telemetry.rules between ts=2022-10-13T09:01:01.487Z and ts=2022-10-13T09:01:01.488Z)
level=error ts=2022-10-13T09:01:01.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T09:01:02.544Z and ts=2022-10-13T09:01:02.548Z)
level=error ts=2022-10-13T09:01:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 5 more times for group=kube-scheduler.rules between ts=2022-10-13T09:01:10.981Z and ts=2022-10-13T09:01:10.983Z)
level=error ts=2022-10-13T09:01:11.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:11.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 13 more times for group=kube-apiserver.rules between ts=2022-10-13T09:01:13.960Z and ts=2022-10-13T09:01:14.271Z)
level=error ts=2022-10-13T09:01:14.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.379Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.998Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:17.523Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:20.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:20.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:20.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:20.289Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:20.732Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:22.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:22.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:23.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:29.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:29.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:29.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:30.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:30.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:41.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.142Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.221Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.305Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.539Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:46.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:47.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.589Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.781Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.789Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:50.183Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:50.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:51.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:51.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:52.189Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89885XM369MDWN23YT72AD.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:01:52.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 22 times for group=openshift-kubernetes.rules between ts=2022-10-13T09:01:52.577Z and ts=2022-10-13T09:01:52.608Z]
level=error ts=2022-10-13T09:01:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:52.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:54.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:57.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 6 times for group=openshift-monitoring.rules between ts=2022-10-13T09:01:57.615Z and ts=2022-10-13T09:01:57.618Z]
level=warn ts=2022-10-13T09:01:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 12 times for group=k8s.rules between ts=2022-10-13T09:01:57.656Z and ts=2022-10-13T09:01:57.710Z]
level=error ts=2022-10-13T09:01:57.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:59.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:59.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:59.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:00.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 5 times for group=openshift-etcd-telemetry.rules between ts=2022-10-13T09:02:01.486Z and ts=2022-10-13T09:02:01.488Z]
level=error ts=2022-10-13T09:02:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 11 times for group=node-exporter.rules between ts=2022-10-13T09:02:02.544Z and ts=2022-10-13T09:02:02.549Z]
level=error ts=2022-10-13T09:02:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:07.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 6 times for group=kube-scheduler.rules between ts=2022-10-13T09:02:10.980Z and ts=2022-10-13T09:02:10.982Z]
level=error ts=2022-10-13T09:02:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 14 times for group=kube-apiserver.rules between ts=2022-10-13T09:02:13.941Z and ts=2022-10-13T09:02:14.260Z]
level=error ts=2022-10-13T09:02:14.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:15.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:15.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:16.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:17.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.639Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.802Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.808Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:20.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:20.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:21.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:22.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:22.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:24.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:29.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:29.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:30.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:30.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:41.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:41.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.096Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.315Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.453Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.572Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:45.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.539Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:46.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:46.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:47.407Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.738Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.932Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:50.317Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:51.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:52.190Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89A2RY3PJQ6DF73MW28F0H.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:02:52.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:52.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:59.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:59.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:00.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:00.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:00.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:06.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:06.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:11.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:11.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:13.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.187Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.349Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:15.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:15.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.087Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:16.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:17.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.732Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.898Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.905Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:20.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:20.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:21.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:21.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:22.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:22.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:29.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:29.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:30.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:30.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:41.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.189Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.535Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:46.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:46.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:47.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.762Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.927Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.933Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:50.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:50.385Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:51.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:52.191Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89BXBZ30M5WZHTJ1NB51SD.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:52.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:52.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:59.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:59.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:59.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:00.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:00.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:00.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:11.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:11.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.202Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.288Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.378Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:15.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:16.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.996Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:17.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.758Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:20.374Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:21.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:22.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:22.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:23.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:29.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:29.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:30.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:30.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:41.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.089Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:43.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.182Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.292Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.416Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:46.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:47.352Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.751Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.936Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:50.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:50.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:51.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:52.192Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89DQZ061CFFYP7PAVQQ4N1.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:04:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:52.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:52.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:59.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:59.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:00.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:06.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:11.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:11.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:13.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.195Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.276Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.368Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:15.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:15.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.471Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.472Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.549Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:16.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:16.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:17.572Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:20.129Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:20.308Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:20.316Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:20.702Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:21.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 41 more times for group=openshift-kubernetes.rules through ts=2022-10-13T09:05:22.609Z]
level=error ts=2022-10-13T09:05:22.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:22.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times for group=openshift-monitoring.rules through ts=2022-10-13T09:05:27.619Z]
level=warn ts=2022-10-13T09:05:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 11 more times for group=k8s.rules through ts=2022-10-13T09:05:27.712Z]
level=error ts=2022-10-13T09:05:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:29.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:29.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:30.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:30.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:30.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 more times for group=node-exporter.rules through ts=2022-10-13T09:05:32.548Z]
level=error ts=2022-10-13T09:05:32.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times for group=kube-scheduler.rules through ts=2022-10-13T09:05:40.981Z]
level=error ts=2022-10-13T09:05:41.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:41.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.974Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.246Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.331Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:45.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:45.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:46.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:47.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.752Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.908Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.915Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:50.311Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:52.193Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89FJJ102PQFY22KYFRT94W.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:05:52.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:52.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:52.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:59.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:59.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:59.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:00.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:00.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:00.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:11.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:11.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.256Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:15.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:16.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:16.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:17.002Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:17.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.808Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:20.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:20.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:20.413Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:22.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warn message for group=openshift-kubernetes.rules repeated 41 more times between ts=2022-10-13T09:06:22.567Z and ts=2022-10-13T09:06:22.603Z]
level=error ts=2022-10-13T09:06:22.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warn message for group=openshift-monitoring.rules repeated 5 more times between ts=2022-10-13T09:06:27.617Z and ts=2022-10-13T09:06:27.620Z]
level=warn ts=2022-10-13T09:06:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warn message for group=k8s.rules repeated 11 more times between ts=2022-10-13T09:06:27.657Z and ts=2022-10-13T09:06:27.701Z]
level=error ts=2022-10-13T09:06:27.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:29.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warn message for group=openshift-etcd-telemetry.rules repeated 4 more times between ts=2022-10-13T09:06:31.487Z and ts=2022-10-13T09:06:31.488Z]
level=error ts=2022-10-13T09:06:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warn message for group=node-exporter.rules repeated 10 more times between ts=2022-10-13T09:06:32.545Z and ts=2022-10-13T09:06:32.548Z]
level=error ts=2022-10-13T09:06:32.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.418Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warn message for group=kube-scheduler.rules repeated 5 more times between ts=2022-10-13T09:06:40.981Z and ts=2022-10-13T09:06:40.983Z]
level=error ts=2022-10-13T09:06:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.167Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.261Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.345Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:46.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:47.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.698Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.888Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.897Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:50.293Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:52.194Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89HD51D3WCB7ZGX74J2VTV.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:06:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:52.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:52.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:52.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:52.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:59.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:59.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:59.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:00.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:00.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:11.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:11.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.106Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.155Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.364Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:16.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:17.302Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.668Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.839Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.847Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:20.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:20.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:21.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:21.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:22.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:22.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:23.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:41.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.204Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.289Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.378Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:45.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:47.207Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... last message repeated 7 times (ts up to 2022-10-13T09:07:49.507Z) ...
level=error ts=2022-10-13T09:07:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.768Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:50.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:50.533Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:51.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:52.195Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89K7R3MB2YWJ7H69Z1BA3Q.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:07:52.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... last message repeated 41 times (ts up to 2022-10-13T09:07:52.603Z) ...
level=error ts=2022-10-13T09:07:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:52.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... last message repeated 5 times (ts up to 2022-10-13T09:07:57.620Z) ...
level=warn ts=2022-10-13T09:07:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... last message repeated 11 times (ts up to 2022-10-13T09:07:57.716Z) ...
level=error ts=2022-10-13T09:07:58.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:59.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:59.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:00.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:00.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... last message repeated 4 times (ts up to 2022-10-13T09:08:01.489Z) ...
level=error ts=2022-10-13T09:08:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... last message repeated 10 times (ts up to 2022-10-13T09:08:02.549Z) ...
level=error ts=2022-10-13T09:08:02.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning above logged 3 times in total between ts=2022-10-13T09:08:04.300Z and ts=2022-10-13T09:08:04.303Z; 2 identical repeats omitted]
level=error ts=2022-10-13T09:08:04.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning above logged 6 times in total between ts=2022-10-13T09:08:10.981Z and ts=2022-10-13T09:08:10.983Z; 5 identical repeats omitted]
level=error ts=2022-10-13T09:08:11.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:11.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning above logged 14 times in total between ts=2022-10-13T09:08:13.947Z and ts=2022-10-13T09:08:14.257Z; 13 identical repeats omitted]
level=error ts=2022-10-13T09:08:14.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:15.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:15.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:16.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:17.311Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning above logged 8 times in total between ts=2022-10-13T09:08:19.503Z and ts=2022-10-13T09:08:19.507Z; 7 identical repeats omitted]
level=error ts=2022-10-13T09:08:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.726Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.888Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.895Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:20.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:20.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:21.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:21.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:22.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning above logged 42 times in total between ts=2022-10-13T09:08:22.565Z and ts=2022-10-13T09:08:22.601Z; 41 identical repeats omitted]
level=error ts=2022-10-13T09:08:22.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 3 more times through ts=2022-10-13T09:08:24.511Z]
level=error ts=2022-10-13T09:08:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:27.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times through ts=2022-10-13T09:08:27.619Z]
level=warn ts=2022-10-13T09:08:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 11 more times through ts=2022-10-13T09:08:27.704Z]
level=error ts=2022-10-13T09:08:27.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:29.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:29.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:29.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:30.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:30.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 4 more times through ts=2022-10-13T09:08:31.488Z]
level=error ts=2022-10-13T09:08:31.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 more times through ts=2022-10-13T09:08:32.549Z]
level=error ts=2022-10-13T09:08:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 2 more times through ts=2022-10-13T09:08:34.300Z]
level=error ts=2022-10-13T09:08:34.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times through ts=2022-10-13T09:08:40.983Z]
level=error ts=2022-10-13T09:08:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 11 more times through ts=2022-10-13T09:08:44.080Z]
level=error ts=2022-10-13T09:08:44.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.348Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:47.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.627Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.805Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.812Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:50.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:51.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:52.196Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89N2B4507E3CGY688Q5N8Y.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:08:52.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:52.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:57.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:59.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:00.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:00.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:11.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:11.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.148Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.323Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:16.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:16.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.996Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:17.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:18.269Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.755Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.917Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.923Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:20.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:20.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:21.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:22.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:22.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:29.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:29.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:30.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:30.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:30.589Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:41.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.168Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.248Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.332Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:45.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:46.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:46.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:47.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.652Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.813Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.821Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:50.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:50.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:52.197Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89PWY5GVP21KBFFYZJKARR.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:09:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:52.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:52.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:11.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.974Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.329Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:16.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:17.380Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.864Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:20.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:20.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:20.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:20.432Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:21.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:21.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.016Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:22.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.642Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.642Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.643Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:22.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:23.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:23.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:29.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:29.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:30.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:30.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:41.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:41.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:43.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.452Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:45.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:45.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:46.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:47.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.735Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.900Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.908Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:50.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:51.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:51.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:52.198Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89RQH60C07V5YWGHGB0JBC.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:10:52.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:59.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:07.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.133Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.333Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.424Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:16.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:17.345Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.935Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:20.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:20.131Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:20.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:20.539Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:21.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:21.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:22.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:22.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:29.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:29.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:30.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:30.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:41.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.128Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.252Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.360Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.473Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:46.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.996Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:47.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:47.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:47.304Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:47.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:47.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:47.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:50.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:50.213Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:50.220Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:50.625Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:51.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:52.199Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89TJ474JNSBTF8SZB61DN9.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:11:52.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:52.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:54.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:59.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:59.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:00.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:00.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:11.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:13.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.374Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:16.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:17.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.675Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.867Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.877Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:20.308Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:21.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:22.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:29.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:29.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:29.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:30.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:30.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 10 more times for group=node-exporter.rules through ts=2022-10-13T09:12:32.548Z)
level=error ts=2022-10-13T09:12:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 5 more times for group=kube-scheduler.rules through ts=2022-10-13T09:12:40.983Z)
level=error ts=2022-10-13T09:12:41.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 11 more times for group=kube-apiserver.rules through ts=2022-10-13T09:12:44.097Z)
level=error ts=2022-10-13T09:12:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.183Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.272Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:45.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:45.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:46.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:46.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:46.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:47.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical warning repeated 7 more times for group=openshift-ingress.rules through ts=2022-10-13T09:12:49.509Z)
level=error ts=2022-10-13T09:12:49.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.907Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:50.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:50.140Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:50.543Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:51.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:52.200Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89WCQ8W8E2VHBAF3ETTBM2.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:12:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 38 more times through ts=2022-10-13T09:12:52.598Z]
level=error ts=2022-10-13T09:12:52.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:52.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 3 more times through ts=2022-10-13T09:12:54.512Z]
level=error ts=2022-10-13T09:12:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times through ts=2022-10-13T09:12:57.618Z]
level=warn ts=2022-10-13T09:12:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 11 more times through ts=2022-10-13T09:12:57.713Z]
level=error ts=2022-10-13T09:12:57.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:59.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:59.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:59.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:00.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:00.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 4 more times through ts=2022-10-13T09:13:01.488Z]
level=error ts=2022-10-13T09:13:01.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 more times through ts=2022-10-13T09:13:02.548Z]
level=error ts=2022-10-13T09:13:02.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:07.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times through ts=2022-10-13T09:13:10.982Z]
level=error ts=2022-10-13T09:13:11.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:11.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 12 more times through ts=2022-10-13T09:13:14.201Z]
level=error ts=2022-10-13T09:13:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.296Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.405Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:15.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:15.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:16.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:16.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:16.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:17.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:20.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:20.302Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:20.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:20.813Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:21.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:21.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:22.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:22.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:23.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:29.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:29.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:30.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:41.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:41.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.143Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.233Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.328Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.429Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.087Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:46.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:46.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:47.237Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.648Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.822Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.830Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.942Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:50.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:51.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:52.201Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89Y7A8BDF7JKN7588ASCXB.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:13:52.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:52.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:53.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:59.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:59.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:59.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:00.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.200Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.380Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:17.348Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.826Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.943Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:20.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:20.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:20.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:20.633Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:22.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:22.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:27.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:29.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:29.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:29.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:29.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:30.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:30.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:41.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.094Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.205Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.307Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.409Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:45.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:46.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:47.414Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:48.262Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.845Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:50.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:50.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:50.484Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:51.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:52.201Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A01X95W2VE1B81V0SS7WX.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:14:52.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:52.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:00.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:00.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.093Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.352Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.468Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.469Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:16.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:16.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.998Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:17.226Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.634Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.794Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.801Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:20.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:21.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:22.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:29.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:29.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:30.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:30.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:30.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:30.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:41.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:41.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.389Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:45.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:45.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:45.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:46.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:47.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.845Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.853Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:50.253Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:51.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:51.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:52.202Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A1WGAJEVAJ12SMB1HYBXK.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:15:52.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:52.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:59.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:00.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:00.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.176Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.279Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:14.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.377Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:14.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:15.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:15.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:16.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:16.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:17.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.684Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.856Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.864Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:20.289Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:21.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:22.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:22.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:23.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:29.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:29.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:30.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:30.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:30.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:41.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 11 more times for group=kube-apiserver.rules between ts=2022-10-13T09:16:43.958Z and ts=2022-10-13T09:16:44.076Z]
level=error ts=2022-10-13T09:16:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.160Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.247Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:45.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:46.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:46.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:47.326Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 7 more times for group=openshift-ingress.rules between ts=2022-10-13T09:16:49.503Z and ts=2022-10-13T09:16:49.506Z]
level=error ts=2022-10-13T09:16:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.749Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:50.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:50.327Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:51.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:52.203Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A3Q3BFPC6BB2MSQ7R687K.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:16:52.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:16:52.567Z and ts=2022-10-13T09:16:52.615Z]
level=error ts=2022-10-13T09:16:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:52.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 11 more times for group=k8s.rules between ts=2022-10-13T09:16:57.659Z and ts=2022-10-13T09:16:57.728Z]
level=error ts=2022-10-13T09:16:57.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:59.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:00.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:00.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.729Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:11.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:11.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:13.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.463Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:15.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:16.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:17.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.665Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.824Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.832Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:20.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:20.282Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:21.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:22.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 41 more times (ts 09:17:22.567Z-09:17:22.605Z)
level=error ts=2022-10-13T09:17:22.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 3 more times (ts 09:17:24.511Z-09:17:24.512Z)
level=error ts=2022-10-13T09:17:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 5 more times (ts 09:17:27.616Z-09:17:27.620Z)
level=warn ts=2022-10-13T09:17:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 11 more times (ts 09:17:27.658Z-09:17:27.705Z)
level=error ts=2022-10-13T09:17:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:27.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 4 more times (ts 09:17:31.488Z)
level=error ts=2022-10-13T09:17:31.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 10 more times (ts 09:17:32.545Z-09:17:32.549Z)
level=error ts=2022-10-13T09:17:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 5 more times (ts 09:17:40.981Z-09:17:40.982Z)
level=error ts=2022-10-13T09:17:40.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:41.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:43.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.215Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.456Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:45.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:46.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:46.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:47.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.643Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.808Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.815Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:50.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:51.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:52.204Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A5HPB0NXV76Y81MNW293G.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:17:52.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:59.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:59.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:59.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:59.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:00.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:00.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:11.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:11.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.089Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.193Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.389Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:15.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:15.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:15.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:16.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:17.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:17.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:17.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:17.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:17.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.679Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.862Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.869Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:20.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:21.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:21.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:22.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:22.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:29.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:30.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:41.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.202Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.305Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.400Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:45.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.465Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:46.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:47.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.513Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.519Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.523Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.524Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.526Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.834Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.947Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:50.094Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:50.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:50.538Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:52.204Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A7C9CRVTV03AMKE9HM2M7.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:18:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:52.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:52.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:59.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:59.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:59.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:59.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:00.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:00.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:00.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:11.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:13.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:13.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.095Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.387Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:15.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:15.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:17.255Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.654Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.817Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.825Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:20.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:22.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:22.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:29.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:29.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:30.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:30.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:30.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:41.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:42.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:42.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:43.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:43.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.145Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.230Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.347Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.449Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:45.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:45.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:46.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:47.252Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.513Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.797Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:50.388Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:52.205Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A96WD3VA8KK51V5HFAHT4.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:19:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:52.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:52.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:59.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:59.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:59.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:00.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:00.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:00.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 5 more times, through ts=2022-10-13T09:20:10.982Z)
level=warn ts=2022-10-13T09:20:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:11.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:11.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 12 more times, through ts=2022-10-13T09:20:14.224Z)
level=error ts=2022-10-13T09:20:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.317Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.415Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:16.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:16.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:17.359Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 7 more times, through ts=2022-10-13T09:20:19.509Z)
level=error ts=2022-10-13T09:20:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.879Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:20.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:20.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:20.556Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:21.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 41 more times, through ts=2022-10-13T09:20:22.615Z)
level=error ts=2022-10-13T09:20:22.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:29.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:29.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:30.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:30.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:41.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:41.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:42.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:42.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:42.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.664Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:43.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.226Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.430Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:45.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:45.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:47.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 7 more times through ts=2022-10-13T09:20:49.508Z]
level=error ts=2022-10-13T09:20:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:49.817Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:49.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:49.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:50.428Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:51.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:51.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:52.206Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AB1FEMVKJS2VNHACSQQRN.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:20:52.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 41 more times through ts=2022-10-13T09:20:52.606Z]
level=error ts=2022-10-13T09:20:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:52.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:53.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 3 more times through ts=2022-10-13T09:20:54.512Z]
level=error ts=2022-10-13T09:20:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 5 more times through ts=2022-10-13T09:20:57.618Z]
level=warn ts=2022-10-13T09:20:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 11 more times through ts=2022-10-13T09:20:57.720Z]
level=error ts=2022-10-13T09:20:57.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:59.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:59.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:59.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:00.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:00.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 4 more times through ts=2022-10-13T09:21:01.488Z]
level=error ts=2022-10-13T09:21:01.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 10 more times through ts=2022-10-13T09:21:02.549Z]
level=error ts=2022-10-13T09:21:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:11.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:11.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.313Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.411Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:15.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:16.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:17.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.798Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:20.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:20.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:20.462Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:21.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:21.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:22.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:22.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:29.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:29.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:30.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:30.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.096Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.227Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.335Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:47.313Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[7 more "Rule sample appending failed" warnings for group=openshift-ingress.rules, 09:21:49.504Z-09:21:49.507Z, all with the same "no space left on device" error]
level=error ts=2022-10-13T09:21:49.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.808Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:50.467Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:52.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:52.207Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8ACW2F8PSXAW38YC33MZQV.tmp-for-creation: no space left on device"
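Every failure in this log has the same root cause: the volume mounted at /prometheus, which holds the write-ahead log and compacted blocks, has run out of space, so both the WAL appends and the mkdir for the compaction temp directory fail with ENOSPC. Purely as an illustrative aside (not part of the captured output), the following minimal Go sketch, assuming a Linux host and using only the standard library, reports how much space is left on that mount:

// Editorial sketch only, not part of the captured Prometheus log.
// Reports remaining space on the filesystem backing /prometheus,
// the path seen in the WAL and compaction errors above (Linux only).
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/prometheus", &st); err != nil {
		fmt.Println("statfs /prometheus failed:", err)
		return
	}
	// Bytes available to unprivileged users vs. total filesystem size.
	free := st.Bavail * uint64(st.Bsize)
	total := st.Blocks * uint64(st.Bsize)
	fmt.Printf("/prometheus: %d of %d bytes free (%.1f%% free)\n",
		free, total, 100*float64(free)/float64(total))
}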
level=error ts=2022-10-13T09:21:52.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[41 more "Rule sample appending failed" warnings for group=openshift-kubernetes.rules, 09:21:52.567Z-09:21:52.620Z, all with the same "no space left on device" error]
level=error ts=2022-10-13T09:21:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:52.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[5 more "Rule sample appending failed" warnings for group=openshift-monitoring.rules, 09:21:57.616Z-09:21:57.618Z, all with the same "no space left on device" error]
level=warn ts=2022-10-13T09:21:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[11 more "Rule sample appending failed" warnings for group=k8s.rules, 09:21:57.658Z-09:21:57.714Z, all with the same "no space left on device" error]
level=error ts=2022-10-13T09:21:58.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[4 more "Rule sample appending failed" warnings for group=openshift-etcd-telemetry.rules, 09:22:01.487Z-09:22:01.488Z, all with the same "no space left on device" error]
level=error ts=2022-10-13T09:22:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[10 more "Rule sample appending failed" warnings for group=node-exporter.rules, 09:22:02.546Z-09:22:02.549Z, all with the same "no space left on device" error]
level=error ts=2022-10-13T09:22:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:11.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:13.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.337Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.432Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:15.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:16.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:16.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:17.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.718Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.879Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.889Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:20.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:20.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:22.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:22.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:29.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:29.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:29.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:30.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:41.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.081Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.182Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.284Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.379Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:46.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:47.321Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.910Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:50.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:50.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:50.515Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:52.208Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AEPNGXJRX9FDT16AH7YMY.tmp-for-creation: no space left on device"
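(Note: the following snippet is not part of the captured output.) Every error in this dump traces back to one condition: the volume mounted at /prometheus is full, so Prometheus can neither append scraped samples or rule results to WAL segment 00000039 nor create the temporary directory it needs to compact the in-memory head block. A minimal sketch of how one might confirm how full the monitoring PVC is from outside the pod, assuming a reachable Thanos Querier/Prometheus query URL, a bearer token with monitoring view access, and the default prometheus-k8s-db-* PVC naming — all hypothetical values here:

```python
# Illustrative only -- not part of the captured test output.
# Assumes: PROM_URL and TOKEN are placeholders you replace, and that the
# kubelet volume-stats metrics are being scraped (default in openshift-monitoring).
import requests

PROM_URL = "https://thanos-querier-openshift-monitoring.apps.example.com"  # assumed route
TOKEN = "sha256~REPLACE_ME"  # assumed token with cluster-monitoring-view access

# Fraction of space still free on the Prometheus data PVCs.
query = (
    'kubelet_volume_stats_available_bytes{namespace="openshift-monitoring",'
    'persistentvolumeclaim=~"prometheus-k8s-db-.*"}'
    " / "
    'kubelet_volume_stats_capacity_bytes{namespace="openshift-monitoring",'
    'persistentvolumeclaim=~"prometheus-k8s-db-.*"}'
)

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": query},
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # test clusters often use self-signed certs; keep verification on elsewhere
)
resp.raise_for_status()

# Standard Prometheus instant-query response: data.result is a list of samples.
for sample in resp.json()["data"]["result"]:
    pvc = sample["metric"].get("persistentvolumeclaim", "<unknown>")
    free_fraction = float(sample["value"][1])
    print(f"{pvc}: {free_fraction:.1%} free")  # ~0% free matches the WAL errors in this log
```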
level=error ts=2022-10-13T09:22:52.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:59.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:06.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:06.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:11.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:11.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:13.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.129Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.236Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.351Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.452Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:15.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:15.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:16.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:16.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:16.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:17.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.790Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:20.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:20.357Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:21.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:22.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.741Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.741Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:29.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:29.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:29.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:30.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:30.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:41.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:43.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.492Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:45.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:46.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:47.286Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.775Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.930Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.938Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:50.368Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:52.209Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AGH8H7N0NZY9P81HPJ4TE.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:23:52.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:52.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:57.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:57.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:59.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:59.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:00.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:04.302Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:11.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:11.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning repeated 13 more times for group=kube-apiserver.rules through ts=2022-10-13T09:24:14.248Z ...]
level=error ts=2022-10-13T09:24:14.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:15.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:15.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:15.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:16.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:16.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:17.280Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning repeated 7 more times for group=openshift-ingress.rules through ts=2022-10-13T09:24:19.506Z ...]
level=error ts=2022-10-13T09:24:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.769Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.926Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.934Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:20.322Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:21.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:21.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:22.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning repeated 41 more times for group=openshift-kubernetes.rules through ts=2022-10-13T09:24:22.615Z ...]
level=error ts=2022-10-13T09:24:22.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning repeated 5 more times for group=openshift-monitoring.rules through ts=2022-10-13T09:24:27.618Z ...]
level=warn ts=2022-10-13T09:24:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning repeated 11 more times for group=k8s.rules through ts=2022-10-13T09:24:27.717Z ...]
level=error ts=2022-10-13T09:24:27.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:29.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:29.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:30.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:30.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:41.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:42.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:42.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:42.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:43.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.432Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:45.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:46.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:46.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:47.366Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.835Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:50.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:50.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:50.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:50.460Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:50.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:51.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:52.211Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AJBVJR57HSS3MN3E8XZJJ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:24:52.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:52.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:53.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:56.007Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:59.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:59.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:59.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:59.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:00.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:11.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.305Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.396Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:15.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:17.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.773Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.933Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:20.369Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:22.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:22.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.751Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.752Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.753Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:29.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:29.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:29.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:29.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:30.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:30.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:30.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(message repeated 10 more times for group=node-exporter.rules through ts=2022-10-13T09:25:32.549Z)
level=error ts=2022-10-13T09:25:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(message repeated 2 more times for group=kubelet.rules through ts=2022-10-13T09:25:34.301Z)
level=error ts=2022-10-13T09:25:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(message repeated 5 more times for group=kube-scheduler.rules through ts=2022-10-13T09:25:40.983Z)
level=warn ts=2022-10-13T09:25:40.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(message repeated 10 more times for group=kube-apiserver.rules through ts=2022-10-13T09:25:44.105Z)
level=error ts=2022-10-13T09:25:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.363Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.467Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:47.195Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(message repeated 7 more times for group=openshift-ingress.rules through ts=2022-10-13T09:25:49.506Z)
level=error ts=2022-10-13T09:25:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.689Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.851Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.859Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.943Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:50.282Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:52.212Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AM6EM9VNZD3D8E4NREAM7.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:25:52.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(message repeated 41 more times for group=openshift-kubernetes.rules through ts=2022-10-13T09:25:52.600Z)
level=error ts=2022-10-13T09:25:52.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:52.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times through ts=2022-10-13T09:25:57.619Z]
level=warn ts=2022-10-13T09:25:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 11 more times through ts=2022-10-13T09:25:57.720Z]
level=error ts=2022-10-13T09:25:57.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:01.494Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 4 more times through ts=2022-10-13T09:26:01.496Z]
level=error ts=2022-10-13T09:26:01.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 10 more times through ts=2022-10-13T09:26:02.549Z]
level=error ts=2022-10-13T09:26:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times through ts=2022-10-13T09:26:10.982Z]
level=warn ts=2022-10-13T09:26:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:11.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 12 more times through ts=2022-10-13T09:26:14.264Z]
level=error ts=2022-10-13T09:26:14.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.397Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.526Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:15.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:15.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:16.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:16.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:17.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 times for group=openshift-ingress.rules, 09:26:19.503Z through 09:26:19.506Z]
level=error ts=2022-10-13T09:26:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.703Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.868Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.878Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:20.337Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:21.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:21.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:22.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 41 times for group=openshift-kubernetes.rules, 09:26:22.567Z through 09:26:22.610Z]
level=error ts=2022-10-13T09:26:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:23.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 3 times for group=kube-prometheus-node-recording.rules, 09:26:24.510Z through 09:26:24.511Z]
level=error ts=2022-10-13T09:26:24.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 times for group=openshift-monitoring.rules, 09:26:27.617Z through 09:26:27.619Z]
level=warn ts=2022-10-13T09:26:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 11 times for group=k8s.rules, 09:26:27.658Z through 09:26:27.721Z]
level=error ts=2022-10-13T09:26:28.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:29.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:29.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:29.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:30.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:30.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:30.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 4 times for group=openshift-etcd-telemetry.rules, 09:26:31.488Z through 09:26:31.489Z]
level=error ts=2022-10-13T09:26:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 times for group=node-exporter.rules, 09:26:32.545Z through 09:26:32.550Z]
level=error ts=2022-10-13T09:26:32.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.987Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:41.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:41.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:42.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:42.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:42.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:42.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.303Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.406Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:45.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:45.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:45.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:46.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:47.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.714Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.928Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:50.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:52.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:52.213Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AP11NNYWC67QDAE4CEY4S.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:26:52.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:52.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:59.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:59.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:59.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:00.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:07.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:11.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.355Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.446Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:15.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:16.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:16.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:17.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.698Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.893Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.901Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:20.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:20.297Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:21.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:22.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.052Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:29.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:29.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:30.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:30.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.552Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.553Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.553Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.554Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.554Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.555Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:41.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:41.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:42.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:42.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:42.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.259Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.392Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.516Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:47.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.676Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.836Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.845Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:50.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:51.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:52.214Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AQVMP4ZQMPS2JDJ2JV0EM.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:27:52.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:52.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:59.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:59.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:00.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:11.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:11.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:11.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.255Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.448Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:15.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:16.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:17.156Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.488Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.650Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.658Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:20.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:21.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:22.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:23.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:27.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:27.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:29.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:29.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:29.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:29.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:41.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:43.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:43.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:43.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.246Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.390Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.514Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:45.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:45.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:46.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:47.398Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.794Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:50.436Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:51.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:51.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:52.215Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8ASP7QAPZRP62881BMVE38.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:28:52.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:57.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.748Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.749Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:59.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:59.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:00.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:11.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:11.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:13.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.155Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.258Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.374Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.476Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:15.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:15.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:16.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:16.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:17.293Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.889Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:20.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:20.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:20.594Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:21.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:22.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:22.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:29.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:30.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:30.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:41.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:42.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:42.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.347Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.457Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:46.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:46.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:46.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:47.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.775Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:50.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:50.468Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:51.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:51.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:52.216Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AVGTR7ANW50JD02BYG6R9.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:29:52.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:52.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:59.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:00.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:11.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:11.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:13.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.316Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.414Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:16.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:16.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:17.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.515Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.516Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.517Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.518Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.943Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:20.186Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:20.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:20.389Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:20.397Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:20.809Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:22.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:22.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:23.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:23.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:29.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:29.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:30.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:30.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:41.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=kube-apiserver.rules warning repeated 11 more times between ts=2022-10-13T09:30:43.967Z and ts=2022-10-13T09:30:44.139Z]
level=error ts=2022-10-13T09:30:44.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:44.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:44.348Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:44.477Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:45.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:45.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:46.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:47.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=openshift-ingress.rules warning repeated 7 more times between ts=2022-10-13T09:30:49.503Z and ts=2022-10-13T09:30:49.507Z]
level=error ts=2022-10-13T09:30:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:49.591Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:49.751Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:49.760Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:50.207Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:50.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:51.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:52.216Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AXBDRNFQ2ZEWWQMS8Y2C1.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:30:52.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=openshift-kubernetes.rules warning repeated 41 more times between ts=2022-10-13T09:30:52.567Z and ts=2022-10-13T09:30:52.610Z]
level=error ts=2022-10-13T09:30:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:52.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:57.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=k8s.rules warning repeated 11 more times between ts=2022-10-13T09:30:57.659Z and ts=2022-10-13T09:30:57.740Z]
level=error ts=2022-10-13T09:30:57.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:59.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:00.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.985Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:11.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:13.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:13.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.140Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.349Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.455Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:15.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:16.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:17.219Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.677Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.911Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.919Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.942Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:20.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:20.372Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:21.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:21.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:22.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(34 further identical warnings for group=openshift-kubernetes.rules omitted; timestamps 09:31:22.570Z through 09:31:22.618Z, all failing with the same "no space left on device" error)
level=error ts=2022-10-13T09:31:22.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(3 further identical warnings for group=kube-prometheus-node-recording.rules omitted; timestamps 09:31:24.510Z through 09:31:24.511Z)
level=error ts=2022-10-13T09:31:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:27.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(5 further identical warnings for group=openshift-monitoring.rules omitted; timestamps 09:31:27.617Z through 09:31:27.620Z)
level=warn ts=2022-10-13T09:31:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(8 further identical warnings for group=k8s.rules omitted; timestamps 09:31:27.659Z through 09:31:27.712Z)
level=error ts=2022-10-13T09:31:27.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:27.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:29.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:29.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:29.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:30.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:30.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(4 further identical warnings for group=openshift-etcd-telemetry.rules omitted; timestamps 09:31:31.487Z through 09:31:31.488Z)
level=error ts=2022-10-13T09:31:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(10 further identical warnings for group=node-exporter.rules omitted; timestamps 09:31:32.545Z through 09:31:32.547Z)
level=error ts=2022-10-13T09:31:32.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(5 further identical warnings for group=kube-scheduler.rules omitted; timestamps 09:31:40.980Z through 09:31:40.982Z)
level=error ts=2022-10-13T09:31:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:41.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(10 further identical warnings for group=kube-apiserver.rules omitted; timestamps 09:31:43.965Z through 09:31:44.084Z)
level=error ts=2022-10-13T09:31:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.325Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:45.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:45.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:46.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:47.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.791Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:50.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:52.218Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AZ60SJAMHV9AV9XQS4MJ4.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:31:52.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:52.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:52.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:57.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:59.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:59.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:59.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:59.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:00.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:00.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical kube-scheduler.rules warning repeated 4 more times between 09:32:10.982Z and 09:32:10.983Z]
level=error ts=2022-10-13T09:32:11.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:11.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical kube-apiserver.rules warning repeated 12 more times between 09:32:13.963Z and 09:32:14.196Z]
level=error ts=2022-10-13T09:32:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.386Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:15.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:16.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:16.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:17.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical openshift-ingress.rules warning repeated 7 more times between 09:32:19.504Z and 09:32:19.507Z]
level=error ts=2022-10-13T09:32:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.705Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.907Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.915Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:20.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:20.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:21.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:22.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical openshift-kubernetes.rules warning repeated 41 more times between 09:32:22.567Z and 09:32:22.616Z]
level=error ts=2022-10-13T09:32:22.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:29.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:29.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:30.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:30.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:30.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:43.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:43.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.196Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.307Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.434Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:45.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:46.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:47.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.615Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.782Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.790Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.946Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:50.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:51.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:51.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:52.218Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B10KTTH2B1ZVVDGYW0TCW.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:32:52.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:52.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:57.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:59.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:59.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:59.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:00.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding kubelet.rules warning recurred 2 more times at 09:33:04.299Z and 09:33:04.300Z]
level=error ts=2022-10-13T09:33:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding kube-scheduler.rules warning recurred 4 more times between 09:33:10.981Z and 09:33:10.984Z]
level=warn ts=2022-10-13T09:33:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:11.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding kube-apiserver.rules warning recurred 12 more times between 09:33:13.972Z and 09:33:14.227Z]
level=error ts=2022-10-13T09:33:14.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.331Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.440Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:15.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:16.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:16.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:16.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:17.366Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding openshift-ingress.rules warning recurred 7 more times between 09:33:19.503Z and 09:33:19.507Z]
level=error ts=2022-10-13T09:33:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.730Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.928Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.937Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:20.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:20.472Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:21.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:21.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding openshift-kubernetes.rules warning recurred 17 more times between 09:33:22.567Z and 09:33:22.573Z]
level=error ts=2022-10-13T09:33:22.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the openshift-kubernetes.rules warning recurred a further 23 times between 09:33:22.573Z and 09:33:22.607Z]
level=error ts=2022-10-13T09:33:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:24.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding kube-prometheus-node-recording.rules warning recurred 3 more times between 09:33:24.510Z and 09:33:24.511Z]
level=error ts=2022-10-13T09:33:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:29.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:29.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:30.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:30.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:41.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.144Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.257Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.484Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:47.320Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.712Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.873Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.883Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:50.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:50.310Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:52.219Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B2V6VA0HVY0R99F0MH5HZ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:33:52.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:52.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:59.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:59.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:00.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:00.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:11.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:11.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.093Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:13.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.436Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:15.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:15.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:16.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:17.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.864Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:20.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:20.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:20.571Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:21.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:22.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:22.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:23.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:29.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:29.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:29.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:29.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:30.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:30.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:41.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:43.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:43.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.380Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.478Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:47.229Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.549Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.753Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.761Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:50.216Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:51.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:52.220Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B4NSV11NWXHB0SXZJMJJY.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:34:52.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:52.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.778Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.779Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.780Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:59.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:59.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:59.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:59.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:00.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:00.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.985Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:11.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:11.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.085Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:13.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.410Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:15.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:16.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:17.248Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.658Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.823Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.831Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:20.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:21.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:21.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:22.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:22.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:23.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:29.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:30.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:41.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:42.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:42.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:42.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:42.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:43.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:43.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.133Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.381Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.515Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:45.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:47.353Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.879Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:50.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:50.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:50.484Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.220Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B6GCW97TGRP7R06CR139D.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:35:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:12.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:12.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:12.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:12.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(warning repeated 12 more times for group=kube-apiserver.rules through ts=2022-10-13T09:36:14.274Z, same err)
level=error ts=2022-10-13T09:36:14.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.376Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.476Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:16.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:17.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(warning repeated 7 more times for group=openshift-ingress.rules through ts=2022-10-13T09:36:19.507Z, same err)
level=error ts=2022-10-13T09:36:19.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.799Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:20.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:20.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:20.411Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:22.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(warning repeated 41 more times for group=openshift-kubernetes.rules through ts=2022-10-13T09:36:22.622Z, same err)
level=error ts=2022-10-13T09:36:22.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:23.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(warning repeated 5 more times for group=openshift-monitoring.rules through ts=2022-10-13T09:36:27.619Z, same err)
level=warn ts=2022-10-13T09:36:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(warning repeated 11 more times for group=k8s.rules through ts=2022-10-13T09:36:27.755Z, same err)
level=error ts=2022-10-13T09:36:27.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:29.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:29.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:30.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(warning repeated 4 more times for group=openshift-etcd-telemetry.rules through ts=2022-10-13T09:36:31.489Z, same err)
level=error ts=2022-10-13T09:36:31.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 10 more times through ts=2022-10-13T09:36:32.547Z)
level=error ts=2022-10-13T09:36:32.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 5 more times through ts=2022-10-13T09:36:40.983Z)
level=error ts=2022-10-13T09:36:41.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:41.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 10 more times through ts=2022-10-13T09:36:44.081Z)
level=error ts=2022-10-13T09:36:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.415Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.613Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:45.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:46.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:46.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:47.469Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 7 more times through ts=2022-10-13T09:36:49.506Z)
level=error ts=2022-10-13T09:36:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:50.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:50.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:50.548Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:51.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:51.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:52.221Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B8AZXXB6XW1DC2S9KKSP6.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:36:52.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 41 more times for group=openshift-kubernetes.rules, last at ts=2022-10-13T09:36:52.609Z]
level=error ts=2022-10-13T09:36:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:52.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times for group=openshift-monitoring.rules, last at ts=2022-10-13T09:36:57.619Z]
level=warn ts=2022-10-13T09:36:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 11 more times for group=k8s.rules, last at ts=2022-10-13T09:36:57.724Z]
level=error ts=2022-10-13T09:36:57.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:59.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:59.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:59.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:59.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 4 more times for group=openshift-etcd-telemetry.rules, last at ts=2022-10-13T09:37:01.487Z]
level=error ts=2022-10-13T09:37:01.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times for group=node-exporter.rules, last at ts=2022-10-13T09:37:02.547Z]
level=error ts=2022-10-13T09:37:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:11.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:11.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.338Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:15.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:15.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:16.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:17.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:17.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:17.249Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:17.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.697Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.859Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.870Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:20.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:20.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:21.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:22.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:23.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:24.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:29.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical node-exporter.rules warning repeated 11 times between ts=2022-10-13T09:37:32.545Z and ts=2022-10-13T09:37:32.549Z]
level=error ts=2022-10-13T09:37:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:41.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical kube-apiserver.rules warning repeated 11 times between ts=2022-10-13T09:37:43.946Z and ts=2022-10-13T09:37:44.091Z]
level=error ts=2022-10-13T09:37:44.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.356Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.470Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.535Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:46.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:47.295Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical openshift-ingress.rules warning repeated 8 times between ts=2022-10-13T09:37:49.504Z and ts=2022-10-13T09:37:49.509Z]
level=error ts=2022-10-13T09:37:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.934Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:50.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:50.095Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:50.541Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:51.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:52.222Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BA5JYXBEZ1X7GS8P1N79R.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:37:52.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical openshift-kubernetes.rules warning repeated 42 times between ts=2022-10-13T09:37:52.566Z and ts=2022-10-13T09:37:52.607Z]
level=error ts=2022-10-13T09:37:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:52.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.745Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.746Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:59.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:00.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:00.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:00.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.989Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:11.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:13.682Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:13.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.421Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:16.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:17.432Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:20.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:20.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:20.436Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:20.449Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:20.926Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:21.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:21.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:22.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:22.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:27.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:29.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:29.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:29.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:29.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:30.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:30.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:30.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:31.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:31.491Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:31.491Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:31.491Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.729Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:41.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.356Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.474Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:47.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.761Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.934Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:50.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:50.368Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:52.223Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BC05ZQ0Z4A6VTEVEK7AQ5.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:38:52.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:52.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:52.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:53.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:57.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:06.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:06.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:06.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:11.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:11.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:12.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:12.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:12.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:12.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:13.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.141Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.474Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:15.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:16.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:17.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.838Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.847Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:20.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:20.265Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:21.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:22.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:29.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:29.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:30.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:30.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:41.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical kube-apiserver.rules "Rule sample appending failed" warning repeated 11 more times between ts=2022-10-13T09:39:43.962Z and ts=2022-10-13T09:39:44.104Z)
level=error ts=2022-10-13T09:39:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.466Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:45.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:46.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:47.349Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical openshift-ingress.rules "Rule sample appending failed" warning repeated 7 more times between ts=2022-10-13T09:39:49.503Z and ts=2022-10-13T09:39:49.506Z)
level=error ts=2022-10-13T09:39:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:50.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:50.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:50.568Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:52.225Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BDTS1TWCZ2F6DAQFFJYYJ.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:39:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(identical openshift-kubernetes.rules "Rule sample appending failed" warning repeated 38 more times between ts=2022-10-13T09:39:52.566Z and ts=2022-10-13T09:39:52.595Z)
level=error ts=2022-10-13T09:39:52.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:52.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:52.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.762Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.763Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:59.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:59.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:00.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:00.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:06.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.449Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:17.181Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.653Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.835Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.843Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:20.276Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:22.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:22.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:23.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:41.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:41.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:42.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:42.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.147Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.258Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.369Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.476Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:45.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:46.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:46.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:47.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.599Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.754Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.762Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:50.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:50.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:51.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:52.226Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BFNC2Q2KXAKJPNX75N0Y3.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:40:52.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:52.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:57.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:59.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:59.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:00.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:00.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:07.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.731Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.987Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.988Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.988Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.989Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.989Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.990Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.991Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:11.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:13.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.143Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.444Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:15.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:15.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:16.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:16.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:17.313Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 more times, ts=2022-10-13T09:41:19.504Z through ts=2022-10-13T09:41:19.508Z]
level=error ts=2022-10-13T09:41:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.741Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.924Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.933Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:20.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:20.324Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:21.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:21.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:22.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 41 more times, ts=2022-10-13T09:41:22.567Z through ts=2022-10-13T09:41:22.613Z]
level=error ts=2022-10-13T09:41:22.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times, ts=2022-10-13T09:41:27.616Z through ts=2022-10-13T09:41:27.619Z]
level=warn ts=2022-10-13T09:41:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 8 more times, ts=2022-10-13T09:41:27.658Z through ts=2022-10-13T09:41:27.708Z]
level=error ts=2022-10-13T09:41:27.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:29.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:29.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:29.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:29.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:30.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:30.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 4 more times, ts=2022-10-13T09:41:31.489Z through ts=2022-10-13T09:41:31.490Z]
level=error ts=2022-10-13T09:41:31.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 more times, ts=2022-10-13T09:41:32.545Z through ts=2022-10-13T09:41:32.549Z]
level=error ts=2022-10-13T09:41:32.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:41.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:41.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:43.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:43.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.128Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.349Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.488Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:45.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:45.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:46.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:47.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.649Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.810Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.818Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:50.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:50.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:51.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:52.227Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BHFZ3TJYYY26MDDGWMH17.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:41:52.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:52.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:59.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:59.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:59.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:59.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:00.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:11.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:11.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:12.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:12.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.150Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.256Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.363Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.478Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:15.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:15.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:16.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:16.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:17.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.798Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:20.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:20.471Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:21.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:21.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:22.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:22.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:27.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.755Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.756Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.757Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:29.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:29.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:29.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:30.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.095Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.125Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.446Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.462Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.552Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:47.000Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:47.629Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:50.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:50.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:50.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:50.690Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.228Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BKAJ4E85V3R65ARJMJFCH.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:42:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:53.245Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:11.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:12.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:12.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:12.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:13.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.233Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.448Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:15.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.464Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.465Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:16.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:17.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:17.287Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:17.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.607Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.758Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.767Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:20.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:20.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:21.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:21.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:22.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 39 more times between ts=2022-10-13T09:43:22.568Z and ts=2022-10-13T09:43:22.616Z]
level=error ts=2022-10-13T09:43:22.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:23.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times between ts=2022-10-13T09:43:27.617Z and ts=2022-10-13T09:43:27.620Z]
level=warn ts=2022-10-13T09:43:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 7 more times between ts=2022-10-13T09:43:27.657Z and ts=2022-10-13T09:43:27.704Z]
level=error ts=2022-10-13T09:43:27.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.760Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.762Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:29.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:29.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:29.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:29.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:29.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:30.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:30.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 4 more times between ts=2022-10-13T09:43:31.487Z and ts=2022-10-13T09:43:31.489Z]
level=error ts=2022-10-13T09:43:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times between ts=2022-10-13T09:43:32.546Z and ts=2022-10-13T09:43:32.549Z]
level=error ts=2022-10-13T09:43:32.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:41.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:41.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times between ts=2022-10-13T09:43:43.978Z and ts=2022-10-13T09:43:44.107Z]
level=error ts=2022-10-13T09:43:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.138Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.236Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.382Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.488Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:45.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:46.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:46.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:47.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.679Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.856Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.865Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:50.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:50.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:50.340Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:51.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:51.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:52.229Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BN55515B8XYZ4F97SYTEW.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:43:52.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:52.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.758Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.758Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.759Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:59.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:59.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:00.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:11.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:11.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:12.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:12.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:12.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:12.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:12.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:13.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.166Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.381Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.484Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:15.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.462Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.463Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.556Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:16.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:16.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:17.546Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:20.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:20.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:20.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:20.726Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:21.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:22.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:22.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:29.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:29.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:29.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:30.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:30.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:41.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:42.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:42.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:42.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:42.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:43.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.213Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.317Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.419Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:45.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:46.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:46.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:47.211Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.844Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.856Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:50.261Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:51.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:52.230Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BPZR6S0K2QSHKV6J0YV42.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:44:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:52.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:52.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:52.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.629Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.630Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.631Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:59.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:59.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:59.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:00.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:00.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:11.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:11.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.220Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.369Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.499Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:15.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:16.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:16.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:16.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:17.397Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.846Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:20.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:20.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:20.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:20.462Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:21.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:22.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:22.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:29.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:29.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:30.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:30.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:41.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:43.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:43.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.303Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.421Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:45.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:46.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:46.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.998Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:47.347Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.869Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:50.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:50.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:50.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:50.613Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:51.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:51.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:52.231Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BRTB7YFPV7XK1WFP1TJ8X.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:45:52.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:52.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:52.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:59.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:59.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:59.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:00.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:00.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:11.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:11.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.221Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.325Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.438Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.137Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:16.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:16.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:17.320Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.771Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.936Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:20.368Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:21.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:22.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:22.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:23.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:27.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:29.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:30.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:30.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.447Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.997Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:47.188Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.859Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.942Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:50.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:50.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:50.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:50.393Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.231Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BTMY7SAS8WN12RDDDGEXD.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:46:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:00.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:00.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:11.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:13.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:13.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.242Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.454Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:15.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:16.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:17.168Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.513Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.513Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.880Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:20.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:20.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:20.522Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:21.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:21.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:22.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:22.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.622Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.773Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.774Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.776Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:29.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:29.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:29.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:30.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:30.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.990Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:41.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:42.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:42.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:43.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.095Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.139Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.171Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.279Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.419Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.538Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:45.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:47.263Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.831Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:50.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:50.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:50.420Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:52.233Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BWFH81FERS7X0VQC5NPK2.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:47:52.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:57.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.748Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.749Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:59.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:59.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:00.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:00.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:00.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:06.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:06.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:11.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:11.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.151Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.249Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.361Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.464Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:15.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:16.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:17.176Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.571Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.745Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.757Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:20.186Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:21.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:22.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:22.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:26.007Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.758Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.758Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.759Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:29.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:29.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:30.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:30.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.257Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.364Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.473Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:45.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:45.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:45.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:46.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:47.275Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.864Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:50.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:50.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:50.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:50.455Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:51.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:51.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:52.233Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BYA49YYSGNH6VC3PY8YKQ.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:48:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:52.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:52.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:59.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:59.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:00.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:00.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:11.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:11.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.081Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.153Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.186Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.419Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.541Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:15.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:15.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:16.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:17.379Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.927Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:20.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:20.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:20.552Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:21.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:21.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:22.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:22.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:23.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:29.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:29.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:29.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:30.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:30.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:42.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:42.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:42.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:43.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:43.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.215Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.323Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.447Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:45.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:47.287Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.700Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.858Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.867Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:50.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:50.249Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:51.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:51.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:52.234Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C04QAHA645TNSMTSQ7Y8C.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:49:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:52.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:52.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.746Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.748Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:59.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:59.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:59.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:00.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:06.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:06.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:11.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:11.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.262Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.359Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.452Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:15.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:16.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:16.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:17.264Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.727Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.890Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.898Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:20.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:20.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:21.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:21.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:22.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:22.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:23.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:29.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:29.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:29.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:30.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:30.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:41.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:42.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:42.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.164Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.311Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.493Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:46.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:46.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:47.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.637Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.810Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.821Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:50.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:50.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:51.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:52.235Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C1ZABCJH806HDW2HDBA8G.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:50:52.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:52.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:57.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:11.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.158Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.309Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.440Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.586Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:15.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:15.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:15.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:16.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:16.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:17.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.666Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.833Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.842Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:20.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:20.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:21.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:21.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:22.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:22.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:23.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.746Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:29.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:29.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:29.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:29.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:29.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:30.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:30.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:41.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:41.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.219Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.329Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.454Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:45.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:45.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:46.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:47.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.622Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.781Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.791Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:50.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:51.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:52.235Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C3SXBT9RKZ2ZM51W7J7D7.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:51:52.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:52.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:52.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.741Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.742Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.743Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:59.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:59.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:00.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:00.589Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.728Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:11.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.207Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.315Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.414Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:15.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:16.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:16.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:16.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:17.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:18.268Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.860Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:20.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:20.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:20.454Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:21.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:21.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:22.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:22.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:23.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:24.515Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:24.517Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:27.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.755Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.756Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.756Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:29.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:29.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:30.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:41.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:43.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.242Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.361Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.477Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:45.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:45.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:46.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:47.246Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.576Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.750Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.759Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:50.175Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:51.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:52.237Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C5MGCVA7P3DK4H7ETDZ0F.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:52:52.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:52.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:53.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.754Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.755Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.756Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:59.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:59.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:00.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:00.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:11.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:11.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:13.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.218Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.321Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.444Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:15.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:16.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:16.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:16.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:17.186Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.614Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.769Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.778Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:20.178Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:21.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:21.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:22.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:22.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:23.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:27.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:29.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:29.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:41.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:43.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.351Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.469Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:45.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:45.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:46.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:47.280Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.641Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.816Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.825Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:50.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:51.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:51.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:52.238Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C7F3DSD3EQYZ8BQQBXMVF.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:53:52.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:52.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:59.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:59.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:59.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:59.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:00.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:11.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:11.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:13.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.143Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.458Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:15.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:17.394Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:20.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:20.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:20.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:20.574Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:21.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:21.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:22.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:54:22.567Z and ts=2022-10-13T09:54:22.611Z]
level=error ts=2022-10-13T09:54:22.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 3 more times for group=kube-prometheus-node-recording.rules between ts=2022-10-13T09:54:24.511Z and ts=2022-10-13T09:54:24.512Z]
level=error ts=2022-10-13T09:54:24.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T09:54:27.617Z and ts=2022-10-13T09:54:27.620Z]
level=warn ts=2022-10-13T09:54:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 11 more times for group=k8s.rules between ts=2022-10-13T09:54:27.659Z and ts=2022-10-13T09:54:27.739Z]
level=error ts=2022-10-13T09:54:28.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:29.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:29.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:30.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:30.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:30.589Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 4 more times for group=openshift-etcd-telemetry.rules between ts=2022-10-13T09:54:31.489Z and ts=2022-10-13T09:54:31.491Z]
level=error ts=2022-10-13T09:54:31.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T09:54:32.545Z and ts=2022-10-13T09:54:32.549Z]
level=error ts=2022-10-13T09:54:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 2 more times for group=kubelet.rules at ts=2022-10-13T09:54:34.299Z]
level=error ts=2022-10-13T09:54:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:41.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:41.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.133Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.237Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.364Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.474Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:47.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.687Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.845Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.854Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:50.261Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:51.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:52.239Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C99PF07EN0K616FG1ETDQ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:54:52.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:52.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:52.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:53.243Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.625Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.626Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.626Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.760Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.760Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:59.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:59.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:00.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.552Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.552Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.553Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.554Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.554Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:11.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:13.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.069Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.252Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.367Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.485Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:15.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:16.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:16.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:17.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.585Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.773Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.785Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:20.258Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:21.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:21.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:22.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 41 more identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules omitted (ts 09:55:22.566Z to 09:55:22.621Z) ...]
level=error ts=2022-10-13T09:55:22.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:23.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 3 more identical warnings for group=kube-prometheus-node-recording.rules omitted (ts 09:55:24.511Z) ...]
level=error ts=2022-10-13T09:55:24.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:27.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 more identical warnings for group=openshift-monitoring.rules omitted (ts 09:55:27.617Z to 09:55:27.619Z) ...]
level=warn ts=2022-10-13T09:55:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 8 more identical warnings for group=k8s.rules omitted (ts 09:55:27.658Z to 09:55:27.717Z) ...]
level=error ts=2022-10-13T09:55:27.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.748Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:29.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:29.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:30.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:30.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:30.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 4 more identical warnings for group=openshift-etcd-telemetry.rules omitted (ts 09:55:31.488Z to 09:55:31.490Z) ...]
level=error ts=2022-10-13T09:55:31.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 10 more identical warnings for group=node-exporter.rules omitted (ts 09:55:32.545Z to 09:55:32.550Z) ...]
level=error ts=2022-10-13T09:55:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 more identical warnings for group=kube-scheduler.rules omitted (ts 09:55:40.981Z to 09:55:40.983Z) ...]
level=error ts=2022-10-13T09:55:41.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:42.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:42.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:42.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 11 more identical warnings for group=kube-apiserver.rules omitted (ts 09:55:43.971Z to 09:55:44.122Z) ...]
level=error ts=2022-10-13T09:55:44.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.230Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.449Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:45.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:45.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:46.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:46.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:47.272Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.747Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.931Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:50.402Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:51.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:52.240Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CB49GTKM1SYW5WSGJ69HF.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:55:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:52.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:52.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:57.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.742Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.743Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.743Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:59.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:59.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:00.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:00.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:07.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:07.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:11.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.097Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.133Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.333Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.443Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:16.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:17.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.687Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.848Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.858Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:20.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:21.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:21.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:22.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 25 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:56:22.575Z and ts=2022-10-13T09:56:22.618Z]
level=error ts=2022-10-13T09:56:22.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 3 more times for group=kube-prometheus-node-recording.rules between ts=2022-10-13T09:56:24.511Z and ts=2022-10-13T09:56:24.512Z]
level=error ts=2022-10-13T09:56:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:27.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T09:56:27.616Z and ts=2022-10-13T09:56:27.619Z]
level=warn ts=2022-10-13T09:56:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 11 more times for group=k8s.rules between ts=2022-10-13T09:56:27.656Z and ts=2022-10-13T09:56:27.737Z]
level=error ts=2022-10-13T09:56:28.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:29.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:29.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:29.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:30.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:30.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 4 more times for group=openshift-etcd-telemetry.rules between ts=2022-10-13T09:56:31.487Z and ts=2022-10-13T09:56:31.489Z]
level=error ts=2022-10-13T09:56:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T09:56:32.545Z and ts=2022-10-13T09:56:32.549Z]
level=error ts=2022-10-13T09:56:32.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 2 more times for group=kubelet.rules between ts=2022-10-13T09:56:34.298Z and ts=2022-10-13T09:56:34.300Z]
level=error ts=2022-10-13T09:56:34.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times for group=kube-scheduler.rules between ts=2022-10-13T09:56:40.980Z and ts=2022-10-13T09:56:40.983Z]
level=warn ts=2022-10-13T09:56:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:41.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 10 more times for group=kube-apiserver.rules between ts=2022-10-13T09:56:43.948Z and ts=2022-10-13T09:56:44.108Z]
level=error ts=2022-10-13T09:56:44.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.143Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.280Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.399Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.519Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:45.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:45.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:45.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:46.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:47.193Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.513Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.516Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.517Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.518Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.519Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.638Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.862Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.880Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.949Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:50.375Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:51.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:52.241Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CCYWHQGE0B8GJAXJ9GJEB.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:56:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:52.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.632Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.632Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.633Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:52.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:52.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:57.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.741Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.743Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.743Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:00.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:00.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:07.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:11.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.230Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.457Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:15.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:15.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:16.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:17.247Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.772Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:20.398Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:21.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:21.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:22.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:22.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:24.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:27.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.756Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.756Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.757Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:30.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:31.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:41.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:42.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:42.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:42.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:43.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.133Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.244Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.373Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.487Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:45.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.996Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:47.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.686Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.884Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.898Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:50.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:50.362Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:51.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:51.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:52.243Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CESFJP0TD6WK6ETVW3ZG7.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:57:52.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:52.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:52.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:52.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:52.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:53.242Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:59.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:59.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:59.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:59.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:59.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:00.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:11.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:13.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.155Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.281Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.411Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.543Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:15.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:16.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:16.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:17.397Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.863Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:20.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:20.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:20.499Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:21.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:21.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:22.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:22.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:23.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:23.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.052Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.742Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.743Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.744Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:27.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:29.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:29.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:29.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:30.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:30.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:41.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:42.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:42.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:43.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:43.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.237Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.356Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.475Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:45.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.550Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:46.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:46.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:46.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.999Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:47.598Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:50.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:50.159Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:50.169Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:50.577Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:51.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:51.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:52.244Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CGM2MSHXX60MM19Z1ERC3.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:58:52.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:52.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:54.515Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:54.515Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:54.516Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:54.516Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:57.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:59.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:59.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:00.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:00.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:06.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:06.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:06.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:11.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:11.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.144Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.371Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.479Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:15.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:15.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:16.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:16.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:17.199Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.607Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.785Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.795Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:20.199Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:21.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:21.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:22.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:22.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.744Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.745Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.745Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:29.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:29.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:29.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:29.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:30.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:30.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.728Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:41.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.146Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.292Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.423Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.528Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:45.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:46.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:46.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:47.193Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.680Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.854Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.865Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:50.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:50.310Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:51.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:51.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:52.245Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CJENMJJDBZDM0WPYTHM3H.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:59:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:52.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:52.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.762Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.763Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:59.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:59.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:00.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:00.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:07.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.728Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:11.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:11.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.194Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.339Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.470Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.574Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:15.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:15.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:16.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:16.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:17.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:17.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:17.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:17.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:17.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:18.268Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:20.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:20.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:20.247Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:20.259Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:20.701Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:21.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:22.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:22.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:27.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.767Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:29.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:30.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:30.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 4 more times, last at ts=2022-10-13T10:00:31.488Z]
level=error ts=2022-10-13T10:00:31.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 10 more times, last at ts=2022-10-13T10:00:32.548Z]
level=error ts=2022-10-13T10:00:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 5 more times, last at ts=2022-10-13T10:00:40.983Z]
level=error ts=2022-10-13T10:00:41.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 10 more times, last at ts=2022-10-13T10:00:44.087Z]
level=error ts=2022-10-13T10:00:44.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:44.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:44.257Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:44.378Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:44.532Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:46.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:47.194Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 7 more times, last at ts=2022-10-13T10:00:49.507Z]
level=error ts=2022-10-13T10:00:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.623Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.776Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.787Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:50.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:51.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:52.245Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CM98NVVBA7P3REVPPSGTQ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:00:52.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical openshift-kubernetes.rules warning repeated 41 more times between ts=2022-10-13T10:00:52.566Z and ts=2022-10-13T10:00:52.615Z]
level=error ts=2022-10-13T10:00:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:52.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical k8s.rules warning repeated 11 more times between ts=2022-10-13T10:00:57.656Z and ts=2022-10-13T10:00:57.741Z]
level=error ts=2022-10-13T10:00:58.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:59.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:59.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:00.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical node-exporter.rules warning repeated 10 more times between ts=2022-10-13T10:01:02.545Z and ts=2022-10-13T10:01:02.549Z]
level=error ts=2022-10-13T10:01:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:11.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.069Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.153Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.390Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.494Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:15.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:15.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:16.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:17.256Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.754Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:20.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:20.392Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:22.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:23.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:23.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:29.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:29.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:30.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:30.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:41.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.248Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.413Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.549Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:45.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:45.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:45.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:46.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:47.247Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.515Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.516Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.517Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.518Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.519Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.798Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:50.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:50.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:50.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:50.548Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:51.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:51.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:52.246Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CP3VPVTRYYR0ZA9RMSA89.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:01:52.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=openshift-kubernetes.rules repeated 36 more times between 10:01:52.570Z and 10:01:52.625Z]
level=error ts=2022-10-13T10:01:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:52.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=openshift-monitoring.rules repeated 5 more times between 10:01:57.616Z and 10:01:57.619Z]
level=warn ts=2022-10-13T10:01:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=k8s.rules repeated 11 more times between 10:01:57.658Z and 10:01:57.742Z]
level=error ts=2022-10-13T10:01:58.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:59.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:00.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:00.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=openshift-etcd-telemetry.rules repeated 4 more times between 10:02:01.487Z and 10:02:01.488Z]
level=error ts=2022-10-13T10:02:01.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=node-exporter.rules repeated 10 more times between 10:02:02.545Z and 10:02:02.548Z]
level=error ts=2022-10-13T10:02:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.731Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=kube-scheduler.rules repeated 5 more times between 10:02:10.981Z and 10:02:10.983Z]
level=warn ts=2022-10-13T10:02:10.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:11.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:11.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=kube-apiserver.rules repeated 12 more times between 10:02:13.966Z and 10:02:14.238Z]
level=error ts=2022-10-13T10:02:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.455Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:16.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:16.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:17.223Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[this openshift-ingress.rules warning repeated 8 times in total between 10:02:19.503Z and 10:02:19.507Z]
level=error ts=2022-10-13T10:02:19.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.730Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.930Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:20.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:20.366Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:21.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:22.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[this openshift-kubernetes.rules warning repeated 42 times in total between 10:02:22.565Z and 10:02:22.617Z]
level=error ts=2022-10-13T10:02:22.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[this k8s.rules warning repeated 12 times in total between 10:02:27.656Z and 10:02:27.760Z]
level=error ts=2022-10-13T10:02:27.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:29.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:29.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:30.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:30.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[this node-exporter.rules warning repeated 11 times in total between 10:02:32.544Z and 10:02:32.547Z]
level=error ts=2022-10-13T10:02:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.727Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:43.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.164Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.295Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.410Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.521Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:46.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:46.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:47.159Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.622Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.779Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.792Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:50.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:52.247Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CQYEQ1X47PMZV76ZPAAPG.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T10:02:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:52.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:52.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.759Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.760Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:59.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:00.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.418Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.418Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:11.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:11.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.664Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.150Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.377Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.484Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:16.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:17.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:17.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:17.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 7 further identical "Rule sample appending failed" warnings for group=openshift-ingress.rules (ts=10:03:19.504 through 10:03:19.507) omitted ...]
level=error ts=2022-10-13T10:03:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:19.640Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:19.791Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:19.801Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:20.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:21.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:21.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:22.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 41 further identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules (ts=10:03:22.567 through 10:03:22.613) omitted ...]
level=error ts=2022-10-13T10:03:22.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 3 further identical "Rule sample appending failed" warnings for group=kube-prometheus-node-recording.rules (ts=10:03:24.511 through 10:03:24.512) omitted ...]
level=error ts=2022-10-13T10:03:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:27.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 further identical "Rule sample appending failed" warnings for group=openshift-monitoring.rules (ts=10:03:27.616 through 10:03:27.619) omitted ...]
level=warn ts=2022-10-13T10:03:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 8 further identical "Rule sample appending failed" warnings for group=k8s.rules (ts=10:03:27.658 through 10:03:27.709) omitted ...]
level=error ts=2022-10-13T10:03:27.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:29.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:29.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:29.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:29.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 4 further identical "Rule sample appending failed" warnings for group=openshift-etcd-telemetry.rules (ts=10:03:31.488 through 10:03:31.489) omitted ...]
level=error ts=2022-10-13T10:03:31.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 10 further identical "Rule sample appending failed" warnings for group=node-exporter.rules (ts=10:03:32.546 through 10:03:32.549) omitted ...]
level=error ts=2022-10-13T10:03:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:41.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:41.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:42.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:42.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:42.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:43.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:43.974Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.148Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.170Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.190Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.204Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.275Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.340Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.514Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.710Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.888Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:45.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:45.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:46.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:47.263Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.752Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.931Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:50.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:51.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:51.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:52.248Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CSS1R9KC59SZV59RZGQQW.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:03:52.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:52.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.741Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.742Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:59.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:59.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:59.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:00.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:00.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.093Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:07.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:11.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:11.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.482Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:15.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:15.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:17.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.696Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.884Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.894Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:20.287Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:21.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:22.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:22.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:23.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.758Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.759Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.760Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:29.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:29.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:29.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:29.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:30.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:30.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:30.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:41.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:43.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:43.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:43.979Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.097Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.147Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.198Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.500Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.619Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:45.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:45.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:46.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:47.338Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:50.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:50.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:50.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:50.634Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:51.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:52.249Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CVKMRJRN07F80W2ZYVB75.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:04:52.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.636Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.637Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.637Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:52.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:52.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:57.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.753Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.754Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.755Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:59.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:59.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:00.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:00.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:11.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.069Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.096Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.143Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.181Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.331Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.447Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.556Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:15.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:15.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:16.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:16.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:16.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:17.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 7 more times for group=openshift-ingress.rules between ts=2022-10-13T10:05:19.503Z and ts=2022-10-13T10:05:19.507Z]
level=error ts=2022-10-13T10:05:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.610Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.775Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.789Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:20.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:20.211Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:21.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:22.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T10:05:22.567Z and ts=2022-10-13T10:05:22.623Z]
level=error ts=2022-10-13T10:05:22.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:27.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T10:05:27.616Z and ts=2022-10-13T10:05:27.620Z]
level=warn ts=2022-10-13T10:05:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 6 more times for group=k8s.rules between ts=2022-10-13T10:05:27.659Z and ts=2022-10-13T10:05:27.697Z]
level=error ts=2022-10-13T10:05:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.773Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.774Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.775Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:29.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:29.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:29.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:30.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T10:05:32.545Z and ts=2022-10-13T10:05:32.548Z]
level=error ts=2022-10-13T10:05:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.727Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:41.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.138Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.387Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.511Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:45.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:46.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:47.300Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.911Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:50.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:50.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:50.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:50.620Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:51.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:52.250Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CXE7S7FBGEA2T4FJKN9EW.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:05:52.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.630Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.631Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:52.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:54.514Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:54.514Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:54.514Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:54.515Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:59.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:59.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:59.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:00.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:00.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:06.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:06.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.729Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:11.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:11.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.147Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.259Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.366Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.504Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:16.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:16.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:17.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.663Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.833Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.844Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:20.253Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:21.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:21.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.638Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.639Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.640Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:22.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:22.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:23.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:27.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.779Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.780Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.781Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:29.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:29.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:29.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:30.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:30.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.728Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:41.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:41.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.279Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.397Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.505Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:45.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:45.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:46.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:47.258Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.654Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.806Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.816Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:50.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:51.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:51.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:52.251Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CZ8TT27KX8DG5D9QY97AD.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T10:06:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:52.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:52.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:52.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:53.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:57.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:59.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:59.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:00.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 4 more times, through ts=2022-10-13T10:07:01.489Z]
level=error ts=2022-10-13T10:07:01.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 more times, through ts=2022-10-13T10:07:02.549Z]
level=error ts=2022-10-13T10:07:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 2 more times, through ts=2022-10-13T10:07:04.300Z]
level=error ts=2022-10-13T10:07:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.728Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times, through ts=2022-10-13T10:07:10.982Z]
level=warn ts=2022-10-13T10:07:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:11.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:12.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:12.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:12.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:13.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 11 more times, through ts=2022-10-13T10:07:14.185Z]
level=error ts=2022-10-13T10:07:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.293Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 2 more times, through ts=2022-10-13T10:07:14.564Z]
level=error ts=2022-10-13T10:07:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:15.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:15.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:15.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.467Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.477Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.545Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:16.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:16.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:16.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:17.656Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 more times, through ts=2022-10-13T10:07:19.507Z]
level=error ts=2022-10-13T10:07:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:20.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:20.272Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:20.286Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:20.738Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:21.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:22.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 24 more times, through ts=2022-10-13T10:07:22.580Z]
level=warn ts=2022-10-13T10:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.630Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:22.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:23.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.763Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.764Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.765Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:29.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:29.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:29.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:30.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.727Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.171Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.288Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.413Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.527Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:46.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:46.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:47.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.532Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.718Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.732Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:50.181Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:51.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:51.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:52.251Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D13DVDH2PPWN8MKEYRHVG.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:07:52.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:52.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:52.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.732Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:57.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.780Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.780Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.781Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:00.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:00.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:00.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:11.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:11.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:13.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.166Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.288Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.405Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.530Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:15.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:15.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:15.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:16.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:16.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:17.339Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.854Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:20.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:20.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:20.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:20.458Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:21.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:22.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 27 more times for group=openshift-kubernetes.rules between ts=2022-10-13T10:08:22.575Z and ts=2022-10-13T10:08:22.625Z]
level=error ts=2022-10-13T10:08:22.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 3 more times for group=kube-prometheus-node-recording.rules between ts=2022-10-13T10:08:24.510Z and ts=2022-10-13T10:08:24.511Z]
level=error ts=2022-10-13T10:08:24.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T10:08:27.616Z and ts=2022-10-13T10:08:27.619Z]
level=warn ts=2022-10-13T10:08:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 11 more times for group=k8s.rules between ts=2022-10-13T10:08:27.656Z and ts=2022-10-13T10:08:27.768Z]
level=error ts=2022-10-13T10:08:28.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:29.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:29.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:29.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:30.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 4 more times for group=openshift-etcd-telemetry.rules between ts=2022-10-13T10:08:31.487Z and ts=2022-10-13T10:08:31.488Z]
level=error ts=2022-10-13T10:08:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T10:08:32.544Z and ts=2022-10-13T10:08:32.548Z]
level=error ts=2022-10-13T10:08:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.730Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times for group=kube-scheduler.rules between ts=2022-10-13T10:08:40.981Z and ts=2022-10-13T10:08:40.983Z]
level=warn ts=2022-10-13T10:08:40.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:41.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:41.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:42.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:42.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:42.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:42.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.089Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:43.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 9 more times for group=kube-apiserver.rules between ts=2022-10-13T10:08:43.949Z and ts=2022-10-13T10:08:44.067Z]
level=error ts=2022-10-13T10:08:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.154Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.283Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.406Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.527Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:45.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:46.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:46.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:47.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:48.262Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 7 identical "Rule sample appending failed" warnings for group=openshift-ingress.rules omitted, ts 10:08:49.505Z through 10:08:49.508Z ...]
level=error ts=2022-10-13T10:08:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.793Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:50.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:50.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:50.456Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:51.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:52.252Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D2Y0WE5H84YRPP2RN61YV.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T10:08:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 6 identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules omitted, ts 10:08:52.573Z through 10:08:52.576Z ...]
level=error ts=2022-10-13T10:08:52.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 35 further identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules omitted, ts 10:08:52.577Z through 10:08:52.641Z ...]
level=error ts=2022-10-13T10:08:52.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:52.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:57.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 identical "Rule sample appending failed" warnings for group=openshift-monitoring.rules omitted, ts 10:08:57.617Z through 10:08:57.621Z ...]
level=warn ts=2022-10-13T10:08:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 11 identical "Rule sample appending failed" warnings for group=k8s.rules omitted, ts 10:08:57.659Z through 10:08:57.744Z ...]
level=error ts=2022-10-13T10:08:58.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:59.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:59.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:00.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:00.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 4 identical "Rule sample appending failed" warnings for group=openshift-etcd-telemetry.rules omitted, ts 10:09:01.487Z through 10:09:01.488Z ...]
level=error ts=2022-10-13T10:09:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 10 identical "Rule sample appending failed" warnings for group=node-exporter.rules omitted, ts 10:09:02.546Z through 10:09:02.549Z ...]
level=error ts=2022-10-13T10:09:02.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.732Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:07.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.727Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:11.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:11.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:13.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.160Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.213Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.479Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.590Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:15.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:15.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:16.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:16.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:16.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:17.296Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.891Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:20.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:20.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:20.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:20.485Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:21.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:21.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:22.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.643Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.644Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.645Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:22.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:23.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:23.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:29.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:29.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:30.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:30.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:41.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:42.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:42.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:42.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.237Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.382Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.514Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:45.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:45.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:47.223Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.682Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.869Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.881Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:50.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:50.280Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:51.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:51.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:52.253Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D4RKXJF2MMG24WBDZEFAV.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:09:52.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:52.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:53.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:57.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.762Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.762Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:59.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:59.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:59.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:59.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:00.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:11.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:13.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:13.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.305Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.449Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.591Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:15.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:15.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:16.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:17.322Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.752Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.905Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.916Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:20.330Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:21.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:21.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:22.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:29.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:29.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:30.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:41.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:41.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.155Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.275Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.436Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.550Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.664Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:45.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:45.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.539Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:47.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... (the same warning for group=openshift-ingress.rules repeated 7 more times between 10:10:49.503Z and 10:10:49.507Z)
level=error ts=2022-10-13T10:10:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:49.823Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:49.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:49.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:50.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:50.378Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:51.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:52.254Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D6K6YCYDJXFRPFDJ7CD0P.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:10:52.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... (the same warning for group=openshift-kubernetes.rules repeated 41 more times between 10:10:52.567Z and 10:10:52.628Z)
level=error ts=2022-10-13T10:10:52.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:52.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... (the same warning for group=k8s.rules repeated 11 more times between 10:10:57.659Z and 10:10:57.767Z)
level=error ts=2022-10-13T10:10:58.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:59.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:00.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:00.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:00.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
... (the same warning for group=node-exporter.rules repeated 10 more times between 10:11:02.545Z and 10:11:02.548Z)
level=error ts=2022-10-13T10:11:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:11.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:11.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.141Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.261Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.360Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.456Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:16.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:17.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.550Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.704Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.715Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:20.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:21.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:22.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:22.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:23.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.752Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.753Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.754Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:29.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:29.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:41.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.350Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.477Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:45.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:47.170Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 7 more times for group=openshift-ingress.rules between ts=2022-10-13T10:11:49.504Z and ts=2022-10-13T10:11:49.509Z]
level=error ts=2022-10-13T10:11:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.601Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.755Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.766Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:50.153Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:51.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:52.255Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D8DSZYNYFF7J9VTHJDQM4.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:11:52.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T10:11:52.567Z and ts=2022-10-13T10:11:52.629Z]
level=error ts=2022-10-13T10:11:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:52.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 11 more times for group=k8s.rules between ts=2022-10-13T10:11:57.658Z and ts=2022-10-13T10:11:57.736Z]
level=error ts=2022-10-13T10:11:58.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:59.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:59.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:59.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:59.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:00.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:00.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T10:12:02.546Z and ts=2022-10-13T10:12:02.549Z]
level=error ts=2022-10-13T10:12:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.985Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:11.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:11.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:12.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:12.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:12.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.153Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.412Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.509Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:16.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:17.001Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:17.188Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.786Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:20.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:20.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:21.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:21.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:27.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:29.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:29.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:29.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:30.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.985Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:41.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:42.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:42.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:42.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:43.979Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.147Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.263Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.387Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.511Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:46.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:47.168Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.517Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.668Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.677Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:50.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:51.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:51.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:52.256Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DA8D0X5ATSE8KKZSDA617.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:12:52.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:59.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:00.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:00.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.728Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:11.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:11.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:12.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:12.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:12.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:12.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:12.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:13.974Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.153Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.267Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.373Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.489Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:15.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:15.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:16.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:17.172Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.557Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.708Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.719Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:20.139Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:21.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:21.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:22.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:22.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:29.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:29.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:30.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:41.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:43.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:43.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.154Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.210Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.321Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.463Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.583Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:45.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:45.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:46.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:47.160Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.632Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.806Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.817Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:50.207Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:50.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:51.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:52.257Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DC301X0Z2YGQ6TXNZ3JWN.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:13:52.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 40 more times, ts=2022-10-13T10:13:52.567Z through ts=2022-10-13T10:13:52.617Z]
level=error ts=2022-10-13T10:13:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:52.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 3 more times, through ts=2022-10-13T10:13:54.512Z]
level=error ts=2022-10-13T10:13:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times, through ts=2022-10-13T10:13:57.618Z]
level=warn ts=2022-10-13T10:13:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 11 more times, through ts=2022-10-13T10:13:57.769Z]
level=error ts=2022-10-13T10:13:58.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:59.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:00.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:00.614Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:00.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 4 more times, through ts=2022-10-13T10:14:01.489Z]
level=error ts=2022-10-13T10:14:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times, through ts=2022-10-13T10:14:02.547Z]
level=error ts=2022-10-13T10:14:02.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:07.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.733Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times, through ts=2022-10-13T10:14:10.984Z]
level=warn ts=2022-10-13T10:14:10.987Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:11.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 11 more times, through ts=2022-10-13T10:14:14.185Z]
level=error ts=2022-10-13T10:14:14.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.322Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.448Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.594Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:15.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:15.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:16.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:16.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:17.299Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.774Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.937Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:20.360Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:21.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:22.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.630Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.631Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.632Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:22.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:23.085Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:29.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:30.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:30.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:30.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.730Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:41.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:43.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.169Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.324Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.481Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.635Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:45.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:45.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.125Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.540Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:46.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:46.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:47.008Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:47.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:50.190Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:50.412Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:50.423Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:50.850Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:52.258Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DDXK1BAXCFS8AZCXC7D4D.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T10:14:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:52.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:52.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:52.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:53.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:57.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.753Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.815Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.818Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.819Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:59.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:59.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:59.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:00.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:00.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.552Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:06.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:06.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:11.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:13.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.125Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.229Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.502Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.658Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.041Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.125Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:16.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:17.453Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[preceding openshift-ingress.rules warning repeated 8 times in total between ts=2022-10-13T10:15:19.503Z and ts=2022-10-13T10:15:19.506Z]
level=error ts=2022-10-13T10:15:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:20.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:20.287Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:20.306Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:20.794Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:21.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:21.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:22.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[preceding openshift-kubernetes.rules warning repeated 42 times in total between ts=2022-10-13T10:15:22.566Z and ts=2022-10-13T10:15:22.623Z]
level=error ts=2022-10-13T10:15:22.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:23.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:23.243Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:27.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[preceding openshift-monitoring.rules warning repeated 6 times in total between ts=2022-10-13T10:15:27.616Z and ts=2022-10-13T10:15:27.618Z]
level=warn ts=2022-10-13T10:15:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[preceding k8s.rules warning repeated 8 times in total between ts=2022-10-13T10:15:27.657Z and ts=2022-10-13T10:15:27.713Z]
level=error ts=2022-10-13T10:15:27.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[preceding k8s.rules warning repeated 4 times in total between ts=2022-10-13T10:15:27.725Z and ts=2022-10-13T10:15:27.768Z]
level=error ts=2022-10-13T10:15:28.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:29.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:29.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:30.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:30.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[preceding openshift-etcd-telemetry.rules warning repeated 5 times in total between ts=2022-10-13T10:15:31.487Z and ts=2022-10-13T10:15:31.489Z]
level=error ts=2022-10-13T10:15:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[preceding node-exporter.rules warning repeated 11 times in total between ts=2022-10-13T10:15:32.544Z and ts=2022-10-13T10:15:32.548Z]
level=error ts=2022-10-13T10:15:32.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.727Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:41.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:41.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:43.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.144Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.191Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.320Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.463Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.603Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:46.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:46.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.998Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:47.743Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:50.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:50.447Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:50.675Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:50.689Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:51.129Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:52.260Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DFR63QS9JXD9KPZAF177B.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:15:52.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:52.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:52.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.763Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.764Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.764Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:59.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:59.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:00.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:00.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:07.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:07.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.730Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:11.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:11.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:13.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.137Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.176Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.293Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.409Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.518Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:15.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:15.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:16.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:17.329Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:18.262Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical group=openshift-ingress.rules warning repeated 7 more times between ts=2022-10-13T10:16:19.504Z and ts=2022-10-13T10:16:19.507Z ...]
level=error ts=2022-10-13T10:16:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:19.930Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:20.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:20.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:20.503Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:21.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:21.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:22.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical group=openshift-kubernetes.rules warning repeated 41 more times between ts=2022-10-13T10:16:22.567Z and ts=2022-10-13T10:16:22.641Z ...]
level=error ts=2022-10-13T10:16:22.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical group=kube-prometheus-node-recording.rules warning repeated 3 more times between ts=2022-10-13T10:16:24.511Z and ts=2022-10-13T10:16:24.512Z ...]
level=error ts=2022-10-13T10:16:24.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:27.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical group=openshift-monitoring.rules warning repeated 5 more times between ts=2022-10-13T10:16:27.617Z and ts=2022-10-13T10:16:27.619Z ...]
level=warn ts=2022-10-13T10:16:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical group=k8s.rules warning repeated 11 more times between ts=2022-10-13T10:16:27.659Z and ts=2022-10-13T10:16:27.767Z ...]
level=error ts=2022-10-13T10:16:28.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:29.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:29.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:29.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:30.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:30.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.089Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical group=openshift-etcd-telemetry.rules warning repeated 4 more times between ts=2022-10-13T10:16:31.487Z and ts=2022-10-13T10:16:31.488Z ...]
level=error ts=2022-10-13T10:16:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical group=node-exporter.rules warning repeated 10 more times between ts=2022-10-13T10:16:32.545Z and ts=2022-10-13T10:16:32.548Z ...]
level=error ts=2022-10-13T10:16:32.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:41.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:41.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:43.979Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.094Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.200Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.339Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.502Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.667Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:46.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:46.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:46.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:47.443Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:50.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:50.138Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:50.593Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:51.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:51.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:52.261Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DHJS518G9SZWDVNM27CGQ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:16:52.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:52.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:53.242Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:57.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.769Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.770Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.771Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:59.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:59.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:59.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:00.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:06.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:06.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:11.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:11.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:12.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:12.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:12.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:13.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.172Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.355Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.494Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.627Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:15.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.462Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:16.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:16.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:17.155Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[7 further identical "Rule sample appending failed" warnings for group=openshift-ingress.rules, ts=2022-10-13T10:17:19.503Z through 10:17:19.506Z]
level=error ts=2022-10-13T10:17:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.637Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.791Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.802Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:20.221Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:22.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[41 further identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules, ts=2022-10-13T10:17:22.566Z through 10:17:22.638Z]
level=error ts=2022-10-13T10:17:22.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:23.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:27.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[5 further identical "Rule sample appending failed" warnings for group=openshift-monitoring.rules, ts=2022-10-13T10:17:27.616Z through 10:17:27.618Z]
level=warn ts=2022-10-13T10:17:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[8 further identical "Rule sample appending failed" warnings for group=k8s.rules, ts=2022-10-13T10:17:27.656Z through 10:17:27.716Z]
level=error ts=2022-10-13T10:17:27.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.764Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.765Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.766Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:29.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:29.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:29.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:29.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:30.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:30.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[4 further identical "Rule sample appending failed" warnings for group=openshift-etcd-telemetry.rules, ts=2022-10-13T10:17:31.487Z through 10:17:31.488Z]
level=error ts=2022-10-13T10:17:31.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[10 further identical "Rule sample appending failed" warnings for group=node-exporter.rules, ts=2022-10-13T10:17:32.545Z through 10:17:32.551Z]
level=error ts=2022-10-13T10:17:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.727Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:41.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.092Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.391Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.526Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.672Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:45.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:46.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:47.322Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:50.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:50.193Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:50.664Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:51.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:51.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:52.262Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DKDC60ARJ5FCZTW1X75K8.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:17:52.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.646Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.647Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.648Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:52.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.768Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.769Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.769Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:59.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:59.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:59.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:00.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:00.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:00.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:11.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:13.682Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.092Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.168Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.364Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.509Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.651Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:15.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:15.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:15.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:16.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:17.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:17.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:17.302Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:17.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:17.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:17.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.753Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.909Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.921Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:20.316Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:21.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:22.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 41 more times, through ts=2022-10-13T10:18:22.631Z]
level=error ts=2022-10-13T10:18:22.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:23.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 3 more times, through ts=2022-10-13T10:18:24.512Z]
level=error ts=2022-10-13T10:18:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:26.008Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:26.008Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:27.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times, through ts=2022-10-13T10:18:27.620Z]
level=warn ts=2022-10-13T10:18:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 11 more times, through ts=2022-10-13T10:18:27.752Z]
level=error ts=2022-10-13T10:18:28.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:29.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:29.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:29.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:30.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 4 more times, through ts=2022-10-13T10:18:31.489Z]
level=error ts=2022-10-13T10:18:31.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times, through ts=2022-10-13T10:18:32.548Z]
level=error ts=2022-10-13T10:18:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times, through ts=2022-10-13T10:18:40.982Z]
level=warn ts=2022-10-13T10:18:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:41.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:41.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:43.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:43.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.146Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.272Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.397Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.509Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:45.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:45.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:46.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:46.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.997Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:47.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:50.152Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:50.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:50.350Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:50.364Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:50.821Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:51.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:52.263Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DN7Z7DZFP7SGN2VJ8EDAY.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:18:52.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.630Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:52.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:53.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.760Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.762Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:59.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:59.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:59.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:59.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:00.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:00.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:07.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.732Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:11.008Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:11.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:12.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:12.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:12.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.081Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.175Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.337Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.486Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.654Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:16.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:16.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:16.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:17.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.515Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.516Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.516Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.862Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:20.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:20.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:20.466Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:21.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:21.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.017Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:22.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.633Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.634Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.634Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:22.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:23.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.748Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.749Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.749Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:29.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:29.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:29.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:29.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:30.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:30.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.730Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:41.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:41.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:43.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.094Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.145Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.182Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.313Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.444Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.578Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:45.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:46.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:46.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:46.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:47.211Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.831Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:50.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:50.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:50.492Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:51.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:51.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:52.265Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DQ2J8ADP8K5ZSEE4P66MN.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:19:52.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.644Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.644Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.645Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:52.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.774Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.775Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.776Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:59.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:59.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:00.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:00.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:00.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:07.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.731Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:11.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:11.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:12.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:12.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:13.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.170Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.295Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.450Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.570Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:15.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:16.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:16.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:17.211Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:18.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:18.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:18.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:18.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:18.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
<----end of log for "prometheus-k8s-0"/"prometheus"

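Note (not part of the captured output): every scrape and rule-evaluation failure above reports the same condition, writes to the WAL segment /prometheus/wal/00000039 failing with "no space left on device" on the prometheus-k8s-db volume. As a hypothetical follow-up, assuming the cluster from this run were still reachable with the same kubeconfig the harness uses and that df is present in the prometheus image, the fill level of that volume could be confirmed directly from the container:

oc --kubeconfig=.kube/config -n openshift-monitoring exec prometheus-k8s-0 -c prometheus -- df -h /prometheus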
Oct 13 10:20:18.965: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c config-reloader -n openshift-monitoring'
Oct 13 10:20:19.167: INFO: Log for pod "prometheus-k8s-0"/"config-reloader"
---->
level=info ts=2022-10-11T16:46:34.883564617Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=fc23b05)"
level=info ts=2022-10-11T16:46:34.883645793Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221006-18:49:18)"
level=info ts=2022-10-11T16:46:34.883857507Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080
level=info ts=2022-10-11T16:46:35.623818554Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
level=info ts=2022-10-11T16:46:35.624019685Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
<----end of log for "prometheus-k8s-0"/"config-reloader"

Oct 13 10:20:19.167: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c thanos-sidecar -n openshift-monitoring'
Oct 13 10:20:19.356: INFO: Log for pod "prometheus-k8s-0"/"thanos-sidecar"
---->
level=info ts=2022-10-11T16:46:35.257353673Z caller=sidecar.go:106 msg="no supported bucket was configured, uploads will be disabled"
level=info ts=2022-10-11T16:46:35.257637254Z caller=options.go:28 protocol=gRPC msg="enabling server side TLS"
level=info ts=2022-10-11T16:46:35.258249839Z caller=options.go:58 protocol=gRPC msg="server TLS client verification enabled"
level=info ts=2022-10-11T16:46:35.259349236Z caller=sidecar.go:326 msg="starting sidecar"
level=info ts=2022-10-11T16:46:35.260723836Z caller=intrumentation.go:60 msg="changing probe status" status=healthy
level=info ts=2022-10-11T16:46:35.260766267Z caller=http.go:63 service=http/server component=sidecar msg="listening for requests and metrics" address=127.0.0.1:10902
level=info ts=2022-10-11T16:46:35.261368472Z caller=intrumentation.go:48 msg="changing probe status" status=ready
level=info ts=2022-10-11T16:46:35.261697172Z caller=grpc.go:123 service=gRPC/server component=sidecar msg="listening for serving gRPC" address=[10.128.23.18]:10901
level=info ts=2022-10-11T16:46:35.264567913Z caller=reloader.go:183 component=reloader msg="nothing to be watched"
level=info ts=2022-10-11T16:46:35.26827879Z caller=tls_config.go:191 service=http/server component=sidecar msg="TLS is disabled." http2=false
level=info ts=2022-10-11T16:46:35.27134032Z caller=sidecar.go:166 msg="successfully loaded prometheus version"
level=info ts=2022-10-11T16:46:35.490427582Z caller=sidecar.go:188 msg="successfully loaded prometheus external labels" external_labels="{prometheus=\"openshift-monitoring/k8s\", prometheus_replica=\"prometheus-k8s-0\"}"
level=info ts=2022-10-11T16:46:35.49056707Z caller=intrumentation.go:48 msg="changing probe status" status=ready
<----end of log for "prometheus-k8s-0"/"thanos-sidecar"

Oct 13 10:20:19.357: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c prometheus-proxy -n openshift-monitoring'
Oct 13 10:20:19.563: INFO: Log for pod "prometheus-k8s-0"/"prometheus-proxy"
---->
2022/10/11 16:46:35 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s
2022/10/11 16:46:35 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2022/10/11 16:46:35 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2022/10/11 16:46:35 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"
2022/10/11 16:46:35 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s
2022/10/11 16:46:35 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled
2022/10/11 16:46:35 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth
I1011 16:46:35.702411       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key
2022/10/11 16:46:35 http.go:107: HTTPS: listening on [::]:9091
<----end of log for "prometheus-k8s-0"/"prometheus-proxy"

Oct 13 10:20:19.563: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c kube-rbac-proxy -n openshift-monitoring'
Oct 13 10:20:19.742: INFO: Log for pod "prometheus-k8s-0"/"kube-rbac-proxy"
---->
I1011 16:46:35.811083       1 main.go:151] Reading config file: /etc/kube-rbac-proxy/config.yaml
I1011 16:46:35.816619       1 main.go:181] Valid token audiences: 
I1011 16:46:35.816725       1 main.go:305] Reading certificate files
I1011 16:46:35.816841       1 reloader.go:98] reloading key /etc/tls/private/tls.key certificate /etc/tls/private/tls.crt
I1011 16:46:35.817159       1 main.go:339] Starting TCP socket on 0.0.0.0:9092
I1011 16:46:35.817755       1 main.go:346] Listening securely on 0.0.0.0:9092
<----end of log for "prometheus-k8s-0"/"kube-rbac-proxy"

Oct 13 10:20:19.742: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c prom-label-proxy -n openshift-monitoring'
Oct 13 10:20:19.880: INFO: Log for pod "prometheus-k8s-0"/"prom-label-proxy"
---->
2022/10/11 16:46:36 Listening insecurely on 127.0.0.1:9095
<----end of log for "prometheus-k8s-0"/"prom-label-proxy"

Oct 13 10:20:19.880: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c kube-rbac-proxy-thanos -n openshift-monitoring'
Oct 13 10:20:20.023: INFO: Log for pod "prometheus-k8s-0"/"kube-rbac-proxy-thanos"
---->
I1011 16:46:36.260993       1 main.go:181] Valid token audiences: 
I1011 16:46:36.262727       1 main.go:305] Reading certificate files
I1011 16:46:36.262851       1 dynamic_cafile_content.go:167] Starting client-ca::/etc/tls/client/client-ca.crt
I1011 16:46:36.263138       1 main.go:339] Starting TCP socket on [10.128.23.18]:10902
I1011 16:46:36.263528       1 main.go:346] Listening securely on [10.128.23.18]:10902
<----end of log for "prometheus-k8s-0"/"kube-rbac-proxy-thanos"

Oct 13 10:20:20.023: INFO: Running 'oc --kubeconfig=.kube/config describe pod/prometheus-k8s-1 -n openshift-monitoring'
Oct 13 10:20:20.181: INFO: Describing pod "prometheus-k8s-1"
Name:                 prometheus-k8s-1
Namespace:            openshift-monitoring
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 ostest-n5rnf-worker-0-8kq82/10.196.2.72
Start Time:           Tue, 11 Oct 2022 16:46:12 +0000
Labels:               app=prometheus
                      app.kubernetes.io/component=prometheus
                      app.kubernetes.io/instance=k8s
                      app.kubernetes.io/managed-by=prometheus-operator
                      app.kubernetes.io/name=prometheus
                      app.kubernetes.io/part-of=openshift-monitoring
                      app.kubernetes.io/version=2.29.2
                      controller-revision-hash=prometheus-k8s-77f9b66476
                      operator.prometheus.io/name=k8s
                      operator.prometheus.io/shard=0
                      prometheus=k8s
                      statefulset.kubernetes.io/pod-name=prometheus-k8s-1
Annotations:          k8s.v1.cni.cncf.io/network-status:
                        [{
                            "name": "kuryr",
                            "interface": "eth0",
                            "ips": [
                                "10.128.23.35"
                            ],
                            "mac": "fa:16:3e:94:4b:ef",
                            "default": true,
                            "dns": {}
                        }]
                      k8s.v1.cni.cncf.io/networks-status:
                        [{
                            "name": "kuryr",
                            "interface": "eth0",
                            "ips": [
                                "10.128.23.35"
                            ],
                            "mac": "fa:16:3e:94:4b:ef",
                            "default": true,
                            "dns": {}
                        }]
                      kubectl.kubernetes.io/default-container: prometheus
                      openshift.io/scc: nonroot
Status:               Running
IP:                   10.128.23.35
IPs:
  IP:           10.128.23.35
Controlled By:  StatefulSet/prometheus-k8s
Init Containers:
  init-config-reloader:
    Container ID:  cri-o://2b6bef26018b326930cad08bb9d3b8b0c61609a26327e0b8383a5ffbcca91d4c
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/prometheus-config-reloader
    Args:
      --watch-interval=0
      --listen-address=:8080
      --config-file=/etc/prometheus/config/prometheus.yaml.gz
      --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 11 Oct 2022 16:46:30 +0000
      Finished:     Tue, 11 Oct 2022 16:46:30 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:  prometheus-k8s-1 (v1:metadata.name)
      SHARD:     0
    Mounts:
      /etc/prometheus/config from config (rw)
      /etc/prometheus/config_out from config-out (rw)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqxsv (ro)
Containers:
  prometheus:
    Container ID:  cri-o://ff98d8a8604e6b4fd133088201e63266e8d65eef437dacd10abd3db0f68df31a
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
    Port:          <none>
    Host Port:     <none>
    Args:
      --web.console.templates=/etc/prometheus/consoles
      --web.console.libraries=/etc/prometheus/console_libraries
      --config.file=/etc/prometheus/config_out/prometheus.env.yaml
      --storage.tsdb.path=/prometheus
      --storage.tsdb.retention.time=15d
      --web.enable-lifecycle
      --web.external-url=https://prometheus-k8s-openshift-monitoring.apps.ostest.shiftstack.com/
      --web.route-prefix=/
      --web.listen-address=127.0.0.1:9090
      --web.config.file=/etc/prometheus/web_config/web-config.yaml
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:41 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        70m
      memory:     1Gi
    Readiness:    exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi] delay=0s timeout=3s period=5s #success=1 #failure=120
    Environment:  <none>
    Mounts:
      /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro)
      /etc/prometheus/certs from tls-assets (ro)
      /etc/prometheus/config_out from config-out (ro)
      /etc/prometheus/configmaps/kubelet-serving-ca-bundle from configmap-kubelet-serving-ca-bundle (ro)
      /etc/prometheus/configmaps/serving-certs-ca-bundle from configmap-serving-certs-ca-bundle (ro)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /etc/prometheus/secrets/kube-etcd-client-certs from secret-kube-etcd-client-certs (ro)
      /etc/prometheus/secrets/kube-rbac-proxy from secret-kube-rbac-proxy (ro)
      /etc/prometheus/secrets/metrics-client-certs from secret-metrics-client-certs (ro)
      /etc/prometheus/secrets/prometheus-k8s-proxy from secret-prometheus-k8s-proxy (ro)
      /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls from secret-prometheus-k8s-thanos-sidecar-tls (ro)
      /etc/prometheus/secrets/prometheus-k8s-tls from secret-prometheus-k8s-tls (ro)
      /etc/prometheus/web_config/web-config.yaml from web-config (ro,path="web-config.yaml")
      /prometheus from prometheus-k8s-db (rw,path="prometheus-db")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqxsv (ro)
  config-reloader:
    Container ID:  cri-o://8f1de870d2f059356e38367f619aa070b2784584fd75705867ea64fbd0e41e46
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/prometheus-config-reloader
    Args:
      --listen-address=localhost:8080
      --reload-url=http://localhost:9090/-/reload
      --config-file=/etc/prometheus/config/prometheus.yaml.gz
      --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:41 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  10Mi
    Environment:
      POD_NAME:  prometheus-k8s-1 (v1:metadata.name)
      SHARD:     0
    Mounts:
      /etc/prometheus/config from config (rw)
      /etc/prometheus/config_out from config-out (rw)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqxsv (ro)
  thanos-sidecar:
    Container ID:  cri-o://05008e4f94d89864fe153ff8d78f28477f7a39b049faf05bb0f60f6472fc27f2
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
    Ports:         10902/TCP, 10901/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      sidecar
      --prometheus.url=http://localhost:9090/
      --tsdb.path=/prometheus
      --grpc-address=[$(POD_IP)]:10901
      --http-address=127.0.0.1:10902
      --grpc-server-tls-cert=/etc/tls/grpc/server.crt
      --grpc-server-tls-key=/etc/tls/grpc/server.key
      --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:48 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  25Mi
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /etc/tls/grpc from secret-grpc-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqxsv (ro)
  prometheus-proxy:
    Container ID:  cri-o://7f58ea7cc403c27cdff172c8e8fda71659bd03f3474f139d85f5f707abe55558
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
    Port:          9091/TCP
    Host Port:     0/TCP
    Args:
      -provider=openshift
      -https-address=:9091
      -http-address=
      -email-domain=*
      -upstream=http://localhost:9090
      -openshift-service-account=prometheus-k8s
      -openshift-sar={"resource": "namespaces", "verb": "get"}
      -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}
      -tls-cert=/etc/tls/private/tls.crt
      -tls-key=/etc/tls/private/tls.key
      -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      -cookie-secret-file=/etc/proxy/secrets/session_secret
      -openshift-ca=/etc/pki/tls/cert.pem
      -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      -htpasswd-file=/etc/proxy/htpasswd/auth
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:48 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  20Mi
    Environment:
      HTTP_PROXY:   
      HTTPS_PROXY:  
      NO_PROXY:     
    Mounts:
      /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro)
      /etc/proxy/htpasswd from secret-prometheus-k8s-htpasswd (rw)
      /etc/proxy/secrets from secret-prometheus-k8s-proxy (rw)
      /etc/tls/private from secret-prometheus-k8s-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqxsv (ro)
  kube-rbac-proxy:
    Container ID:  cri-o://c375c94f8370593926824bdf14898b7fbabf403375bbedd3f399502fbcf51adc
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Port:          9092/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:9092
      --upstream=http://127.0.0.1:9095
      --config-file=/etc/kube-rbac-proxy/config.yaml
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
      --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:48 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        1m
      memory:     15Mi
    Environment:  <none>
    Mounts:
      /etc/kube-rbac-proxy from secret-kube-rbac-proxy (rw)
      /etc/tls/private from secret-prometheus-k8s-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqxsv (ro)
  prom-label-proxy:
    Container ID:  cri-o://1e75a55b09ea279ec7878c3b3fb2dbbcc9771651400c64368240fe20effe7d95
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
    Port:          <none>
    Host Port:     <none>
    Args:
      --insecure-listen-address=127.0.0.1:9095
      --upstream=http://127.0.0.1:9090
      --label=namespace
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:56 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        1m
      memory:     15Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqxsv (ro)
  kube-rbac-proxy-thanos:
    Container ID:  cri-o://7780a1ec4a1b9561b06dc659c72b488406246bf2ba470d9e3190e650af070647
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Port:          10902/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=[$(POD_IP)]:10902
      --upstream=http://127.0.0.1:10902
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
      --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      --allow-paths=/metrics
      --logtostderr=true
      --client-ca-file=/etc/tls/client/client-ca.crt
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:56 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  10Mi
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /etc/tls/client from metrics-client-ca (ro)
      /etc/tls/private from secret-prometheus-k8s-thanos-sidecar-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqxsv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  prometheus-k8s-db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-k8s-db-prometheus-k8s-1
    ReadOnly:   false
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s
    Optional:    false
  tls-assets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-tls-assets
    Optional:    false
  config-out:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  prometheus-k8s-rulefiles-0:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-k8s-rulefiles-0
    Optional:  false
  web-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-web-config
    Optional:    false
  secret-kube-etcd-client-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-etcd-client-certs
    Optional:    false
  secret-prometheus-k8s-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-tls
    Optional:    false
  secret-prometheus-k8s-proxy:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-proxy
    Optional:    false
  secret-prometheus-k8s-thanos-sidecar-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-thanos-sidecar-tls
    Optional:    false
  secret-kube-rbac-proxy:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-rbac-proxy
    Optional:    false
  secret-metrics-client-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  metrics-client-certs
    Optional:    false
  configmap-serving-certs-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      serving-certs-ca-bundle
    Optional:  false
  configmap-kubelet-serving-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kubelet-serving-ca-bundle
    Optional:  false
  secret-prometheus-k8s-htpasswd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-htpasswd
    Optional:    false
  metrics-client-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      metrics-client-ca
    Optional:  false
  secret-grpc-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-grpc-tls-bg9h55jpjel3o
    Optional:    false
  prometheus-trusted-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-trusted-ca-bundle-2rsonso43rc5p
    Optional:  true
  kube-api-access-qqxsv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>


Oct 13 10:20:20.181: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-1 -c init-config-reloader -n openshift-monitoring'
Oct 13 10:20:20.371: INFO: Log for pod "prometheus-k8s-1"/"init-config-reloader"
---->
level=info ts=2022-10-11T16:46:30.977634094Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=fc23b05)"
level=info ts=2022-10-11T16:46:30.977909075Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221006-18:49:18)"
<----end of log for "prometheus-k8s-1"/"init-config-reloader"

Oct 13 10:20:20.371: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-1 -c prometheus -n openshift-monitoring'
Oct 13 10:20:21.395: INFO: Log for pod "prometheus-k8s-1"/"prometheus"
---->
level=error ts=2022-10-13T08:55:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:38.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:40.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:41.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:41.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:41.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:41.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:41.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:41.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:42.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:42.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:43.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:43.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.190Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:44.255Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:44.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:45.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:45.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:45.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:45.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:45.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:46.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:46.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:46.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:46.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:46.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:47.225Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.279Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88X3TFH7MJPSQB6BMJCJE4.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T08:55:47.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:47.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:48.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:48.262Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:49.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.766Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.929Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.934Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:50.323Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:50.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:50.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:50.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:50.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:51.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:51.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:52.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:52.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:53.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:54.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:55.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:55.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:55.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:56.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:57.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:57.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:57.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:57.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:55:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:58.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:55:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:00.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:00.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:01.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:04.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:05.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:06.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:08.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:10.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:11.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:11.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:11.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:11.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:11.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:12.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:13.938Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:13.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.308Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:14.374Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:14.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:15.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:15.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:15.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:16.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:16.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:16.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:16.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:16.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:17.350Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:17.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[7 further identical openshift-ingress.rules "Rule sample appending failed" warnings (ts 08:56:19.503Z-08:56:19.508Z) omitted]
level=warn ts=2022-10-13T08:56:19.674Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:19.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.843Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.849Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:20.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:20.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:20.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:20.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:20.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:20.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:21.085Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:21.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:21.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:21.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:21.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:21.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[41 further identical openshift-kubernetes.rules "Rule sample appending failed" warnings (ts 08:56:22.566Z-08:56:22.607Z) omitted]
level=error ts=2022-10-13T08:56:22.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:22.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:22.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:23.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[3 further identical kube-prometheus-node-recording.rules "Rule sample appending failed" warnings (ts 08:56:24.510Z-08:56:24.511Z) omitted]
level=error ts=2022-10-13T08:56:24.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:24.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:25.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:25.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:25.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:26.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:27.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:27.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:27.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:27.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[5 further identical openshift-monitoring.rules "Rule sample appending failed" warnings (ts 08:56:27.616Z-08:56:27.619Z) omitted]
level=warn ts=2022-10-13T08:56:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[11 further identical k8s.rules "Rule sample appending failed" warnings (ts 08:56:27.656Z-08:56:27.708Z) omitted]
level=error ts=2022-10-13T08:56:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:28.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:29.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:30.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:30.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:30.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[4 further identical openshift-etcd-telemetry.rules "Rule sample appending failed" warnings (ts 08:56:31.473Z-08:56:31.475Z) omitted]
level=error ts=2022-10-13T08:56:32.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[10 further identical node-exporter.rules "Rule sample appending failed" warnings (ts 08:56:32.545Z-08:56:32.548Z) omitted]
level=error ts=2022-10-13T08:56:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:32.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:34.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:36.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.717Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:40.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:41.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:41.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:41.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:41.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:43.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:43.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.140Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:44.333Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:44.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:45.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:45.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:45.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:45.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:45.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:46.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:46.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:46.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:47.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.280Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88YYDG2QN83TC50M1NPXMH.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T08:56:47.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:47.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:48.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:49.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.905Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:50.133Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:50.140Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:50.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:50.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:50.564Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:50.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:50.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:51.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:51.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:51.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:52.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:52.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:52.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:53.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:55.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:55.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:55.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:55.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:56.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:56:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:57.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:58.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:59.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:59.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:00.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:00.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:00.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:00.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:02.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:04.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:05.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:06.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:07.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:08.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:10.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:11.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:11.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:11.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:11.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:11.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:11.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:12.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:12.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:12.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:13.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.192Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:14.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:14.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:15.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:15.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:15.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:15.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:16.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:16.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:16.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.539Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:16.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:17.470Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:17.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.851Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:20.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:20.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:20.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:20.396Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:20.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:20.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:20.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:21.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:21.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:22.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:22.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:22.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:24.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:25.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:25.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:26.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:28.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:29.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:30.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:30.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:30.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:30.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:33.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:34.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:35.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:36.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:38.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:39.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:40.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:41.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:41.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:41.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:41.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:42.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.938Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:43.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.195Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:44.259Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:44.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:45.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:45.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:45.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:45.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:46.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:46.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:46.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:47.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.281Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF890S0HQYN3YBRX7ANZ2X91.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T08:57:47.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:47.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:48.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.496Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.659Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.665Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:50.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:50.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:50.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:50.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:50.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:50.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:51.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:51.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:51.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:52.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:52.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:52.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:53.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:54.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:55.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:55.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:55.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:56.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:57.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:57.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:57.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:57:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:57.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:57:59.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:00.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:00.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:00.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:02.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:05.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:06.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:08.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:09.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:11.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:11.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:11.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:11.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:13.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.138Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:14.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:14.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:15.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:15.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:15.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:15.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:15.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:16.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:16.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:16.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:16.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:17.183Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:17.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:18.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.511Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.673Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.678Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:20.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:20.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:20.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:20.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:20.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:20.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:21.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:21.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:22.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:22.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:22.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:24.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:25.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:25.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:26.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.979Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.147Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.302Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.282Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF892KKJB9NJ4A513HAVAT78.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T08:58:47.313Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.673Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.839Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.847Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:50.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:50.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:50.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:50.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:50.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:50.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:52.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:52.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:52.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:53.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:54.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:55.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:55.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:55.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:55.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:56.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:57.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:57.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:57.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:57.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:58.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:59.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:00.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:00.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:00.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.093Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:02.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:03.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:05.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:06.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:07.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:08.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:11.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:11.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:11.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:11.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:13.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.177Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.277Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:14.373Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:15.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:15.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:15.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:15.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:16.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:16.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:16.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:16.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:17.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:17.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.672Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.844Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.849Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:19.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:20.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:20.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:20.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:20.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:20.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:20.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:20.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:21.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:21.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:22.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:22.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:22.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:22.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:24.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:25.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:25.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:25.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:26.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:27.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:27.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:27.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:27.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:27.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:27.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:28.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:29.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:29.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:29.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:30.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:30.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:32.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:34.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:35.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:36.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:37.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:38.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:40.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(the preceding warning for group=kube-scheduler.rules repeats 5 more times with the same "no space left on device" WAL write error, ts=2022-10-13T08:59:40.981Z to 2022-10-13T08:59:40.983Z)
level=error ts=2022-10-13T08:59:41.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:41.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:41.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:41.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:41.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:41.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:42.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:42.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:42.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:43.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:43.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(the preceding warning for group=kube-apiserver.rules repeats 5 more times with the same "no space left on device" WAL write error, ts=2022-10-13T08:59:43.955Z to 2022-10-13T08:59:43.998Z)
level=error ts=2022-10-13T08:59:43.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(the preceding warning for group=kube-apiserver.rules repeats 5 more times with the same "no space left on device" WAL write error, ts=2022-10-13T08:59:44.008Z to 2022-10-13T08:59:44.062Z)
level=error ts=2022-10-13T08:59:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.204Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:44.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:44.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:45.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:45.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:45.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:46.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:46.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:46.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:47.007Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.283Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF894E6KZW7XPPR4SVM2KBE6.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T08:59:47.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:48.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:49.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(the preceding warning for group=openshift-ingress.rules repeats 7 more times with the same "no space left on device" WAL write error, ts=2022-10-13T08:59:49.504Z to 2022-10-13T08:59:49.507Z)
level=warn ts=2022-10-13T08:59:49.884Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:50.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:50.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:50.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:50.488Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:50.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:50.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:50.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:50.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:51.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:51.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(the preceding warning for group=openshift-kubernetes.rules repeats 41 more times with the same "no space left on device" WAL write error, ts=2022-10-13T08:59:52.567Z to 2022-10-13T08:59:52.619Z)
level=error ts=2022-10-13T08:59:52.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:52.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:52.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:53.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:55.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:55.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:55.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:55.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:56.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:57.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:57.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:57.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:57.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:57.614Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(the preceding warning for group=openshift-monitoring.rules repeats 5 more times with the same "no space left on device" WAL write error, ts=2022-10-13T08:59:57.615Z to 2022-10-13T08:59:57.617Z)
level=warn ts=2022-10-13T08:59:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(the preceding warning for group=k8s.rules repeats 11 more times with the same "no space left on device" WAL write error, ts=2022-10-13T08:59:57.658Z to 2022-10-13T08:59:57.704Z)
level=error ts=2022-10-13T08:59:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:58.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:59.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:59.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:00.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:00.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:00.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:01.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:02.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:03.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:04.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:05.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:06.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:08.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:09.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:10.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:11.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:11.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:11.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:11.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:11.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:11.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:12.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:12.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:13.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.140Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.227Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:14.313Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:15.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:15.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:15.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:15.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:16.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.462Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:16.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:17.437Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:17.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:19.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:20.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:20.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:20.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:20.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:20.537Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:20.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:20.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:20.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:21.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:21.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:21.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(message repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:00:22.566Z and ts=2022-10-13T09:00:22.612Z)
level=error ts=2022-10-13T09:00:22.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:22.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:22.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:24.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:25.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:25.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:26.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:27.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:27.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(message repeated 11 more times for group=k8s.rules between ts=2022-10-13T09:00:27.658Z and ts=2022-10-13T09:00:27.703Z)
level=error ts=2022-10-13T09:00:27.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:29.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:30.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:30.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:31.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(message repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T09:00:32.545Z and ts=2022-10-13T09:00:32.548Z)
level=error ts=2022-10-13T09:00:32.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:34.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:35.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:36.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:37.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:38.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:40.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:41.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:41.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:41.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:41.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:41.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:41.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:43.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.144Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.221Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:44.297Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:44.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:45.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:45.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:45.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:45.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:45.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:46.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:46.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:46.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:46.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:47.200Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.284Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8968SMK6TRQYN77ADM68YP.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:00:47.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:47.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:49.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.603Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.772Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.778Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:50.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:50.196Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:50.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:50.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:50.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:50.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:51.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:51.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:52.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:52.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:52.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:53.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:55.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:55.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:55.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:56.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:57.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:57.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:57.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:58.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:00:59.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.977Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:11.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:11.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:11.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:11.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:11.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.081Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.157Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.254Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.327Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:17.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.659Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.835Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.841Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:20.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:20.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:20.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:20.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:20.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:20.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:20.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 40 more times through ts=2022-10-13T09:01:22.605Z]
level=error ts=2022-10-13T09:01:22.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:22.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:23.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:24.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:25.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:25.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:25.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:26.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 11 more times through ts=2022-10-13T09:01:27.705Z]
level=error ts=2022-10-13T09:01:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:28.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:29.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:29.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:30.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:30.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times through ts=2022-10-13T09:01:32.548Z]
level=error ts=2022-10-13T09:01:32.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:32.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:34.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:35.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:36.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:37.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:38.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:40.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:41.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:41.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:41.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:41.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:41.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:42.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:43.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.126Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.195Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:44.271Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:45.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:45.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:45.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:45.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:45.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:46.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:46.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.285Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8983CNEEGHWNVY2EKX52ZD.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:01:47.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:47.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.604Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.770Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.777Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:49.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:50.174Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:50.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:50.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:50.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:50.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:50.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:51.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:51.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:52.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:52.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:52.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:54.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:55.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:56.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:57.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:57.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:57.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:57.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:57.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:58.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:59.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:59.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:00.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:00.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:00.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:01.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:04.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:05.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:07.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:07.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:07.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:07.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:07.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:07.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:08.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:11.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:11.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:11.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:13.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:13.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:13.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.188Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.263Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:14.329Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:14.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:15.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:15.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:15.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:16.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:16.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:16.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:17.184Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:17.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:18.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:18.271Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.683Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.857Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.864Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:20.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:20.259Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:20.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:20.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:20.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:20.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:21.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:21.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:21.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 24 more times between ts=2022-10-13T09:02:22.575Z and ts=2022-10-13T09:02:22.602Z]
level=error ts=2022-10-13T09:02:22.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:22.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:22.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:24.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:25.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:25.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:26.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:27.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.112Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:27.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:27.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 5 more times between ts=2022-10-13T09:02:27.617Z and ts=2022-10-13T09:02:27.620Z]
level=warn ts=2022-10-13T09:02:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 11 more times between ts=2022-10-13T09:02:27.658Z and ts=2022-10-13T09:02:27.698Z]
level=error ts=2022-10-13T09:02:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:29.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:29.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:29.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:30.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:30.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:30.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:30.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:31.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 4 more times between ts=2022-10-13T09:02:31.475Z and ts=2022-10-13T09:02:31.476Z]
level=error ts=2022-10-13T09:02:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 10 more times between ts=2022-10-13T09:02:32.545Z and ts=2022-10-13T09:02:32.547Z]
level=error ts=2022-10-13T09:02:32.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:33.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:34.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:35.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:36.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:37.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:38.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:40.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 5 more times between ts=2022-10-13T09:02:40.981Z and ts=2022-10-13T09:02:40.983Z]
level=error ts=2022-10-13T09:02:41.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:41.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:41.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:41.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:41.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:42.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:43.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:43.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:43.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:43.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 8 more times between ts=2022-10-13T09:02:44.000Z and ts=2022-10-13T09:02:44.091Z]
level=error ts=2022-10-13T09:02:44.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.170Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:44.327Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:44.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:45.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:45.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:45.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:45.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:45.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:46.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:46.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:46.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:46.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.286Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF899XZPD8RZFXMTNF9R01Q8.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:02:47.313Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:47.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:49.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... last message repeated 7 more times for group=openshift-ingress.rules between 09:02:49.503Z and 09:02:49.507Z ...]
level=warn ts=2022-10-13T09:02:49.672Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.828Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.833Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:50.215Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:50.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:50.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:50.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:50.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:51.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:51.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:51.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:52.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... last message repeated 41 more times for group=openshift-kubernetes.rules between 09:02:52.566Z and 09:02:52.598Z ...]
level=error ts=2022-10-13T09:02:52.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:52.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:54.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:55.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:55.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:55.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:55.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:56.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:57.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:57.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:57.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:02:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... last message repeated 5 more times for group=openshift-monitoring.rules between 09:02:57.616Z and 09:02:57.618Z ...]
level=warn ts=2022-10-13T09:02:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... last message repeated 11 more times for group=k8s.rules between 09:02:57.660Z and 09:02:57.716Z ...]
level=error ts=2022-10-13T09:02:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:58.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:02:59.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:00.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:00.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:00.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:01.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... last message repeated 4 more times for group=openshift-etcd-telemetry.rules between 09:03:01.474Z and 09:03:01.475Z ...]
level=error ts=2022-10-13T09:03:02.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... last message repeated 10 more times for group=node-exporter.rules between 09:03:02.545Z and 09:03:02.550Z ...]
level=error ts=2022-10-13T09:03:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:02.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:04.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:05.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:06.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:06.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:06.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:08.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.717Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:11.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:11.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:11.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:11.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:13.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.189Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.258Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:14.325Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:14.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:15.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:15.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:15.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:15.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:16.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:16.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:16.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:16.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:17.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:17.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:18.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.544Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.730Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.738Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:19.943Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:20.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:20.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:20.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:20.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:20.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:20.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:21.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:21.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:21.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:22.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:22.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:22.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:24.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:25.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:25.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:26.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:28.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:29.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:30.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:30.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:30.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:30.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:30.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:31.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:34.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:35.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:38.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.737Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:40.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:41.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:41.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:41.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:41.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:42.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:43.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.137Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.211Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:44.283Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:44.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:45.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:45.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:45.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:46.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:46.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:46.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:46.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:47.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.287Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89BRJQ5VGH5B4ES5T0VD23.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:03:47.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:47.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:48.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:49.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.499Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.667Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.674Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:50.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:50.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:50.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:50.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:50.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:50.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:51.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:51.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:51.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:51.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:51.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:52.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:52.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:52.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:53.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:55.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:55.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:55.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:55.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:56.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:03:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:57.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:58.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:59.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:59.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:00.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:00.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:00.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:00.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:00.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.089Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:02.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:04.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:05.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:06.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:07.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:08.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:11.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:11.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:11.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:11.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:11.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.157Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.236Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:14.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:14.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:15.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:15.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:15.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:15.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:16.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:16.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:16.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:17.183Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:17.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.633Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.830Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.836Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:20.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:20.216Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:20.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:20.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:20.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:20.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:21.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:21.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:21.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:22.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:22.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:22.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:24.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:24.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:25.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:25.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:26.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:27.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:29.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:29.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:30.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:30.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:33.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:34.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:38.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:40.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:41.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:41.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:41.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:41.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:42.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:43.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:43.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:44.322Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:44.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:45.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:45.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:45.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:46.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:46.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:46.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:46.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:46.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:47.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.287Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89DK5QPY4YYJPGHQSV6AAE.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:04:47.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:47.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:48.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:49.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.513Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.515Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.516Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.516Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.517Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.589Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.748Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.754Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:50.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:50.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:50.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:50.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:50.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:51.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:52.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:52.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:54.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:55.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:55.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:55.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:56.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:57.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:58.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:59.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:00.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:00.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:01.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:05.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:06.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:06.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:06.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:06.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:07.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:08.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:10.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:11.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:11.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:11.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:11.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:11.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:13.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:14.353Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:14.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:15.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:15.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:15.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:16.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:16.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:16.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.555Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:16.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:17.355Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:17.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.744Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.914Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.920Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:20.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:20.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:20.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:20.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:20.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:20.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:21.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:21.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:21.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:22.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:22.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:22.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:24.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:25.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:25.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:25.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:26.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:27.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:27.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:27.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:28.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:29.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:30.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:30.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:30.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:30.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:32.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.093Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:35.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:38.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:40.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:41.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:41.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:41.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:41.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:41.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.979Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:43.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.131Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:44.282Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:44.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:45.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:45.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:45.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:46.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:46.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:46.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:47.261Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.288Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89FDRRP8NAYMWCK0W4CK7T.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:05:47.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:47.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:49.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.595Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.776Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.783Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:50.176Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:50.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:50.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:50.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:50.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:50.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:52.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:52.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:52.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:55.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:55.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:56.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:57.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:58.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:59.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:00.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:00.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:00.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:04.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:05.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:06.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:07.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:08.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:10.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:11.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:11.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:11.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:11.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.141Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.219Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.295Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:15.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:15.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:15.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:15.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:15.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:16.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:16.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:16.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:16.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:16.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:16.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:17.223Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:17.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:18.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.730Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.893Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.900Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:20.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:20.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:20.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:20.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:20.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:20.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:20.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:22.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:22.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:22.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:23.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:24.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:25.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:25.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:25.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:26.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:27.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:27.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:27.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:27.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:27.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.418Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:28.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:29.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:30.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:30.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:30.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:30.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:32.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:34.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:35.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:36.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:38.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:40.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:41.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:41.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:41.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:43.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.153Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:44.305Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:44.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:45.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:45.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:45.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:45.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:45.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:46.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:46.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:46.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:47.286Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.289Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89H8BS174Q8Y3XQ8MMCXJ5.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:06:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:47.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.559Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.732Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.741Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:50.150Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:50.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:50.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:50.732Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:52.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:52.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:52.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:53.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:54.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:55.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:55.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:55.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:55.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:56.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:59.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:59.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:00.085Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:00.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:00.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:00.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:05.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:07.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:08.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:09.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:10.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:11.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:11.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:11.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:11.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:11.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:11.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.938Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:13.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.137Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:14.289Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:14.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:15.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:15.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:15.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:16.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:16.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:16.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:17.310Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:17.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:18.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.596Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.763Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.769Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:20.153Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:20.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:20.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:20.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:21.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:21.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:21.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:22.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:22.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:22.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:23.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:24.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:25.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:26.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:27.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:28.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:30.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:30.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:30.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:30.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:34.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:35.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:36.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:38.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.727Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:40.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:41.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:41.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:41.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:41.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:41.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:42.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:43.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.131Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:44.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:44.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:45.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:45.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:45.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:45.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:45.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:46.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:46.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:46.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:46.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:47.229Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.290Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89K2YS72C25Q0KNVR1T7D2.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:07:47.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:47.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:49.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.726Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.893Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.899Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:50.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:50.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:50.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:50.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:50.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:50.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:51.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:51.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:51.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:51.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:52.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:52.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:52.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:53.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:55.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:55.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:55.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:56.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:57.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:57.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:57.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:57.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:07:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:58.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:59.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:00.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:00.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:00.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:01.477Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:04.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:05.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:06.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:08.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:10.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:11.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:11.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:11.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:11.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:11.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:13.974Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:13.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.128Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:14.276Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:15.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:15.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:15.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:15.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:16.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:16.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:16.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:16.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:16.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:17.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:17.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.594Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:19.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.756Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.761Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:20.141Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:20.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:20.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:20.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:20.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:20.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:21.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:21.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:22.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:22.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:22.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:23.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:24.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:27.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:27.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:27.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:27.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:28.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:29.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:30.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:30.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:34.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:35.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:36.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:37.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:38.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:40.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:40.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:41.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:41.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:41.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:41.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:41.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:42.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:43.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:43.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.165Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:44.310Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:44.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:45.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:45.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:45.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:45.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:46.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:46.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:46.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:46.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:47.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.291Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89MXHV6VR63J4QYTT53N0Q.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:08:47.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:47.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:48.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:49.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.685Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.847Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.853Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:50.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:50.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:50.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:50.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:50.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:51.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:51.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:51.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:51.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 38 more times, through ts=2022-10-13T09:08:52.601Z]
level=error ts=2022-10-13T09:08:52.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:52.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:52.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:53.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:54.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:55.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:55.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:55.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:55.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:56.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:57.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:57.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:57.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:57.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:08:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times, through ts=2022-10-13T09:08:57.618Z]
level=warn ts=2022-10-13T09:08:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 11 more times, through ts=2022-10-13T09:08:57.701Z]
level=error ts=2022-10-13T09:08:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:58.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:59.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:59.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:00.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:00.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:00.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:01.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 4 more times, through ts=2022-10-13T09:09:01.475Z]
level=error ts=2022-10-13T09:09:02.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times, through ts=2022-10-13T09:09:02.547Z]
level=error ts=2022-10-13T09:09:02.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:02.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:04.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:05.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:07.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:08.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:10.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times, through ts=2022-10-13T09:09:10.983Z]
level=error ts=2022-10-13T09:09:11.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:11.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:11.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:11.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:11.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:13.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.146Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:14.304Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:14.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:15.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:15.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:15.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:15.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:16.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:16.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:16.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:16.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:17.254Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:17.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:18.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.724Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.890Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.897Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:20.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:20.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:20.348Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:20.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:20.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:20.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:21.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:21.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:21.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:21.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:22.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:22.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:22.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:23.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:24.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:25.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:25.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:25.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:25.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:26.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.122Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:28.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:29.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:29.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:30.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:30.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:30.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:30.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 10 more times for group=node-exporter.rules between 09:09:32.545Z and 09:09:32.549Z]
level=error ts=2022-10-13T09:09:32.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:34.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:35.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:37.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:38.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:40.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 5 more times for group=kube-scheduler.rules between 09:09:40.981Z and 09:09:40.983Z]
level=error ts=2022-10-13T09:09:41.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:41.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:41.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:41.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 7 more times for group=kube-apiserver.rules between 09:09:43.997Z and 09:09:44.073Z]
level=error ts=2022-10-13T09:09:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.096Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.167Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.263Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:45.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:45.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:45.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:45.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:46.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:46.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:46.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.292Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89PR4WCB64B419ZHHPY0G9.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:09:47.295Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:47.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:48.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:49.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 7 more times for group=openshift-ingress.rules between 09:09:49.503Z and 09:09:49.507Z]
level=warn ts=2022-10-13T09:09:49.702Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.901Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.907Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:50.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:50.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:50.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:50.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:51.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:52.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 41 more times for group=openshift-kubernetes.rules between 09:09:52.566Z and 09:09:52.608Z]
level=error ts=2022-10-13T09:09:52.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:52.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:53.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:55.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:55.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:55.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:55.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:04.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:05.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:06.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:07.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:08.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:10.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:11.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:11.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:11.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:11.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:13.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:14.281Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:14.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:15.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:15.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:15.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:16.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:16.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:16.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:16.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:17.221Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:17.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:18.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.496Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.664Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.671Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:20.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:20.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:20.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:20.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:20.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:21.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:21.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:21.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:22.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:22.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:22.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:24.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:25.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:25.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:25.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:26.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:27.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:28.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:29.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:29.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:30.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:30.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:31.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:33.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:35.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:36.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:38.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:40.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous group=kube-scheduler.rules warning repeated 5 more times through ts=2022-10-13T09:10:40.984Z]
level=error ts=2022-10-13T09:10:41.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:41.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:41.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:41.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:43.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:43.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:43.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous group=kube-apiserver.rules warning repeated 7 more times through ts=2022-10-13T09:10:44.078Z]
level=error ts=2022-10-13T09:10:44.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.150Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.220Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:44.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:45.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:45.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:45.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:45.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:46.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:46.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:46.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:46.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:47.261Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.292Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89RJQWR34GNS5WS6WNWC1P.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:10:47.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:47.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:49.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous group=openshift-ingress.rules warning repeated 7 more times through ts=2022-10-13T09:10:49.506Z]
level=warn ts=2022-10-13T09:10:49.664Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.817Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.823Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:50.248Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:50.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:50.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:50.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:50.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:50.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:51.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:51.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:51.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:51.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous group=openshift-kubernetes.rules warning repeated 41 more times through ts=2022-10-13T09:10:52.605Z]
level=error ts=2022-10-13T09:10:52.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:52.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:52.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:53.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:54.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:55.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:55.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:56.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:58.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:59.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:00.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:01.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:02.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:04.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:05.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:06.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:07.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:07.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:07.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:07.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:07.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:08.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:10.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:11.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:11.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:11.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:11.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:12.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.162Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:14.313Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:14.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:15.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:15.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:16.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:16.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:16.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:16.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:17.293Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:17.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:18.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.565Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.733Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.740Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:20.145Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:20.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:20.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:20.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:20.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:20.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:20.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:21.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:21.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:21.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:21.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.016Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.016Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:22.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:22.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:22.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:23.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:24.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:25.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:25.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:25.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:26.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:27.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:28.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:29.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:29.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:30.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:30.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:30.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:32.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:35.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:38.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:40.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:41.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:41.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:41.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:41.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:41.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.152Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.248Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:44.340Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:46.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:46.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:46.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:47.205Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:47.294Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89TDAXXJ1YG1BAASXSYYZP.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:11:47.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:48.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:48.265Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.606Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.770Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.776Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:50.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:50.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:50.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:50.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:50.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:51.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:51.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:52.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:52.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:52.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:54.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:55.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:55.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:55.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:56.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:57.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:58.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:59.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:00.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:00.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:00.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:00.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:01.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:02.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:04.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:05.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:08.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:10.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:11.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:11.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:11.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:11.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:11.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:11.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:11.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:12.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:13.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.321Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:14.397Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:14.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:16.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:16.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:16.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:17.288Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:17.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:18.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:19.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.711Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.892Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.901Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:20.297Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:20.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:20.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:20.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:20.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:20.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:21.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:21.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:21.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:21.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:21.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:21.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:21.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:22.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:22.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:22.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:24.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:25.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:25.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:26.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:28.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:29.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:30.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:30.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:34.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:35.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:37.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:38.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:39.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:40.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:41.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:41.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:41.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:41.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:41.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:43.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:43.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:43.974Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:43.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.142Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:44.289Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.732Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:44.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:45.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:45.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:45.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:45.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:46.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:46.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:47.162Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.295Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89W7XYBFRCKEQ7T69FCCJ8.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:12:47.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:47.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:48.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.573Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.798Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.805Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:50.244Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:50.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:50.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:50.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:50.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:50.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:51.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:51.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:51.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:52.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:52.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:52.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:53.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:55.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:55.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:55.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:56.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:57.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:57.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:57.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:57.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:57.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:57.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:59.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:00.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:00.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:00.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:00.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:02.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:04.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:05.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:06.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:07.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:07.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:07.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:07.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:07.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:07.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:08.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:09.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:11.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:11.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:11.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:11.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:12.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:13.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:13.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.170Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.253Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:14.327Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:14.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:15.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:15.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:15.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:15.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:16.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:16.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:16.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.467Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.468Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:16.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.541Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:16.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:17.471Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:17.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:19.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 7 more times for group=openshift-ingress.rules between 09:13:19.504Z and 09:13:19.507Z with the same err]
level=error ts=2022-10-13T09:13:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.894Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:20.069Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:20.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:20.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:20.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:20.526Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:20.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:20.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:20.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:20.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:21.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:21.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:21.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 41 more times for group=openshift-kubernetes.rules between 09:13:22.566Z and 09:13:22.613Z with the same err]
level=error ts=2022-10-13T09:13:22.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:22.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:22.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:24.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:25.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:25.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:27.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 5 more times for group=openshift-monitoring.rules between 09:13:27.616Z and 09:13:27.618Z with the same err]
level=warn ts=2022-10-13T09:13:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 11 more times for group=k8s.rules between 09:13:27.657Z and 09:13:27.708Z with the same err]
level=error ts=2022-10-13T09:13:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:28.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:29.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:29.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:30.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:30.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:30.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:31.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 4 more times for group=openshift-etcd-telemetry.rules between 09:13:31.474Z and 09:13:31.475Z with the same err]
level=error ts=2022-10-13T09:13:32.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 10 more times for group=node-exporter.rules between 09:13:32.546Z and 09:13:32.550Z with the same err]
level=error ts=2022-10-13T09:13:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:32.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:33.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:34.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:35.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:36.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:38.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:40.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:41.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:41.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:41.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:41.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:41.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:42.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:43.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.213Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.317Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:44.430Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:44.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:45.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:45.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:45.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:45.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:46.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:46.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.296Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89Y2H03C57MW5JRRZS9G6S.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:13:47.317Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:47.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:48.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:49.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.672Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.848Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.857Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:50.282Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:50.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:50.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:50.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:50.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:50.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:51.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:51.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:51.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:51.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:52.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:52.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:52.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:53.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:54.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:55.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:55.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:55.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:56.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:13:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:57.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:58.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:13:59.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:00.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:00.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:00.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.156Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:17.216Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.690Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.907Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.917Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:20.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:20.380Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:20.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:20.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:20.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:22.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:22.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:22.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:23.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:24.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:25.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:25.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:25.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:27.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:27.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:27.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:27.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:28.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:29.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:30.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:30.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:30.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:30.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:32.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:34.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:36.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:37.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:38.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:40.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:41.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:41.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:41.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:41.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:41.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.974Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:43.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.225Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:44.306Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:45.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:45.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:45.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:45.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:46.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:46.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:46.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:47.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.297Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89ZX410CD1V1DRNMY17P14.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:14:47.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:47.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:49.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.587Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.756Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.763Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:50.146Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:50.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:50.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:50.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:50.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:51.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:51.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:51.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:52.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:52.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:52.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:54.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:55.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:55.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:56.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:58.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:59.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:00.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:00.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:00.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:00.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 4 more times for group=openshift-etcd-telemetry.rules through ts=2022-10-13T09:15:01.476Z]
level=error ts=2022-10-13T09:15:02.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 10 more times for group=node-exporter.rules through ts=2022-10-13T09:15:02.548Z]
level=error ts=2022-10-13T09:15:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:04.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:05.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:06.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:07.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:08.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:10.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 5 more times for group=kube-scheduler.rules through ts=2022-10-13T09:15:10.984Z]
level=error ts=2022-10-13T09:15:11.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:11.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:11.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:11.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:13.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 8 more times for group=kube-apiserver.rules through ts=2022-10-13T09:15:14.064Z]
level=error ts=2022-10-13T09:15:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.148Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:14.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:14.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:15.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:15.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:15.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:15.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:16.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:16.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:16.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:17.227Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:17.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 7 more times for group=openshift-ingress.rules through ts=2022-10-13T09:15:19.507Z]
level=warn ts=2022-10-13T09:15:19.574Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.741Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.748Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:20.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:20.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:20.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:20.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:20.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:20.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:20.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:21.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:21.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:21.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:22.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:22.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:22.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:24.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:25.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:25.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:28.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:29.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:30.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:30.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:30.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:34.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:35.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:38.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:39.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:40.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:41.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:41.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:41.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:41.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:41.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.178Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.267Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:44.366Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:44.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:45.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:45.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:45.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:45.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:46.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:46.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:46.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:46.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:47.276Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.297Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A1QQ1KQNJ4WP6Z2QWGHB5.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:15:47.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:47.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:49.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.683Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.867Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.874Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:50.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:50.333Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:50.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:50.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:50.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:50.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:50.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:51.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:52.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:52.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:52.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:54.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:55.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:55.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:55.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:55.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:56.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.614Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:15:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:57.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:58.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:15:59.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:00.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:00.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:00.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message above repeated 4 more times for group=openshift-etcd-telemetry.rules through ts=2022-10-13T09:16:01.476Z]
level=error ts=2022-10-13T09:16:02.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message above repeated 10 more times for group=node-exporter.rules through ts=2022-10-13T09:16:02.548Z]
level=error ts=2022-10-13T09:16:02.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:02.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message above repeated 2 more times for group=kubelet.rules through ts=2022-10-13T09:16:04.300Z]
level=error ts=2022-10-13T09:16:04.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:04.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:05.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:06.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:08.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message above repeated 5 more times for group=kube-scheduler.rules through ts=2022-10-13T09:16:10.983Z]
level=error ts=2022-10-13T09:16:11.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:11.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:11.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:11.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:12.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:13.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message above repeated 2 more times for group=kube-apiserver.rules through ts=2022-10-13T09:16:13.971Z]
level=error ts=2022-10-13T09:16:13.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message above repeated 8 more times for group=kube-apiserver.rules through ts=2022-10-13T09:16:14.075Z]
level=error ts=2022-10-13T09:16:14.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:14.162Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message above repeated 2 more times for group=kube-apiserver.rules through ts=2022-10-13T09:16:14.342Z]
level=error ts=2022-10-13T09:16:14.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:14.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:14.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:15.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:15.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:15.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:15.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:16.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:16.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:16.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:17.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:17.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:18.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message above repeated 7 more times for group=openshift-ingress.rules through ts=2022-10-13T09:16:19.506Z]
level=warn ts=2022-10-13T09:16:19.653Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.814Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.820Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:20.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:20.205Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:20.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:20.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:20.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:20.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:20.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:21.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:21.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:21.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:21.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:21.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message above repeated 30 more times for group=openshift-kubernetes.rules through ts=2022-10-13T09:16:22.582Z]
level=warn ts=2022-10-13T09:16:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:22.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:22.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:22.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:23.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:24.514Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:24.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:25.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:25.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:25.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:26.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:28.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:29.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:30.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:30.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:30.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:30.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:30.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:31.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:34.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:36.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:38.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:40.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:41.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:41.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:41.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:41.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:41.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:43.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:43.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:44.411Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:44.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:45.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:45.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:45.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:46.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:46.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:46.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:47.263Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.299Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A3JA23GCK1EAQM9PN6E8N.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:16:47.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:47.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:49.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.578Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.745Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.752Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:50.189Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:50.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:50.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:50.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:51.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:52.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:52.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:52.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:53.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:53.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:54.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:55.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:55.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:56.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:57.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.055Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.056Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:57.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:57.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:57.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:16:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:57.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:58.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:59.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:00.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:00.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:00.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:00.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:04.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:05.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:06.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:08.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:11.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:11.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:11.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:11.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:13.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:13.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:13.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.167Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.253Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:14.331Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:14.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:15.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:15.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:15.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:16.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:16.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:17.188Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:17.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:18.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:19.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.710Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.868Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.876Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:20.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:20.309Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:20.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:20.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:20.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:20.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:20.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:21.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:21.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:21.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:22.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning for group=openshift-kubernetes.rules repeated 27 more times through ts=2022-10-13T09:17:22.607Z]
level=error ts=2022-10-13T09:17:22.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:22.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:24.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:25.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:25.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:27.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:27.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:27.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning for group=k8s.rules repeated 11 more times through ts=2022-10-13T09:17:27.701Z]
level=error ts=2022-10-13T09:17:27.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning for group=node-exporter.rules repeated 10 more times through ts=2022-10-13T09:17:32.549Z]
level=error ts=2022-10-13T09:17:32.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:34.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:38.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:41.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:41.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:41.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:41.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:41.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning for group=kube-apiserver.rules repeated 11 more times through ts=2022-10-13T09:17:44.065Z]
level=error ts=2022-10-13T09:17:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.143Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.225Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:44.321Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:44.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:45.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:45.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:45.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:46.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:46.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:46.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:47.229Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.301Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A5CX5ZFFF8AHYH7CTJHHB.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:17:47.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:47.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:49.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.716Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.893Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.900Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:50.279Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:50.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:50.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:50.732Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:50.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:51.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:51.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:52.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:52.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:52.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:54.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:54.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:55.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:55.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:55.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:55.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:56.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:57.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:58.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:59.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:59.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:00.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:00.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:00.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:01.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:02.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:03.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:04.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:05.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:06.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:07.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:08.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:10.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(5 further identical warnings for group=kube-scheduler.rules between 09:18:10.982Z and 09:18:10.984Z)
level=error ts=2022-10-13T09:18:11.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:11.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:11.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:11.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:11.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:12.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:13.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(7 further identical warnings for group=kube-apiserver.rules between 09:18:14.001Z and 09:18:14.070Z)
level=error ts=2022-10-13T09:18:14.106Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.151Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:14.323Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:14.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:15.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:15.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:16.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:16.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:16.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:17.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:17.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:17.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:17.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:18.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(7 further identical warnings for group=openshift-ingress.rules between 09:18:19.504Z and 09:18:19.507Z)
level=warn ts=2022-10-13T09:18:19.635Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:19.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.795Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.803Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:20.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:20.218Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:20.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:20.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:20.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:20.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:20.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:21.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:21.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:21.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:21.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(24 further identical warnings for group=openshift-kubernetes.rules between 09:18:22.567Z and 09:18:22.577Z)
level=error ts=2022-10-13T09:18:22.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(16 further identical warnings for group=openshift-kubernetes.rules between 09:18:22.578Z and 09:18:22.603Z)
level=error ts=2022-10-13T09:18:22.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:22.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:23.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:24.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:25.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:25.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:25.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:26.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:27.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:28.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:29.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:29.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:30.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:30.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:31.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:34.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:38.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:40.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:41.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:41.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:41.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:41.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:41.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:43.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.184Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.273Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:44.364Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:44.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:45.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:45.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:45.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:46.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:46.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:47.277Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.302Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A77G5DKKDXJ1ESMHZ5STT.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:18:47.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:47.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:49.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.762Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.917Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.925Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:50.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:50.316Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:50.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:50.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:50.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:51.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:52.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:52.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:52.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:54.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:55.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:55.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:56.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:18:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:57.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:58.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:18:59.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:00.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:00.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:04.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:05.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:06.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:07.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:08.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:10.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:11.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:11.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:11.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:11.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:11.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:13.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:13.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.146Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.408Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:14.544Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:14.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:15.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:15.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:15.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:15.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:15.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:17.218Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.497Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.659Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.667Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:20.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:20.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:20.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:20.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:22.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:22.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:22.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:25.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:25.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.121Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:29.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:30.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:30.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:30.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:31.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:34.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:36.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:37.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:38.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:40.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:41.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:41.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:41.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:41.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:41.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:41.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:42.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:42.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:42.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:42.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:43.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.181Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.271Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:44.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:44.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:45.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:45.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:45.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:46.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:46.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:46.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:47.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.305Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A9238H9NX9JQ3S6NEZ4FK.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:19:47.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:47.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:49.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.535Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.702Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.709Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:50.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:50.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:50.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:50.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:50.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:50.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:51.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:52.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:52.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:55.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:55.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:56.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:57.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:58.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:59.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:00.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:00.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:00.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:01.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:02.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.106Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:04.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:05.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:06.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:07.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:08.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:11.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:11.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:11.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:11.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:12.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:13.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.141Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.225Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.315Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:14.406Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:15.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:15.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:15.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:15.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:15.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:16.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:16.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:16.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:16.996Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:17.383Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:17.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:18.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:19.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.793Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:19.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:20.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:20.368Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:20.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:20.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:20.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:20.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:21.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:21.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:21.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:21.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:21.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:22.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:22.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:22.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:23.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:24.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:25.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:25.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:25.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:26.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:27.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:27.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:27.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:27.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:27.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:28.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:29.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:29.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:30.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:31.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:32.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:34.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:35.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:36.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:37.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:38.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:40.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:41.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:41.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:41.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:41.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:41.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:41.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:42.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:42.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:42.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:43.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:43.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.283Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:44.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:44.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:45.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:45.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:45.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:46.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:46.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:46.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:46.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:47.258Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.305Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AAWP94GF4N59S493CWJ46.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:20:47.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:47.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:48.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:49.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning for group=openshift-ingress.rules repeated 7 more times between ts=2022-10-13T09:20:49.504Z and ts=2022-10-13T09:20:49.507Z with the same "no space left on device" error)
level=warn ts=2022-10-13T09:20:49.548Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:49.703Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:49.709Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:50.160Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:50.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:50.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:50.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:50.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:50.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:50.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:51.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:51.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:51.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:51.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:51.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning for group=openshift-kubernetes.rules repeated 38 more times between ts=2022-10-13T09:20:52.567Z and ts=2022-10-13T09:20:52.590Z with the same "no space left on device" error)
level=error ts=2022-10-13T09:20:52.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:52.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:52.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:53.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:54.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:55.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:55.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:55.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:56.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:57.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:57.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:57.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:57.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:57.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning for group=openshift-monitoring.rules repeated 5 more times between ts=2022-10-13T09:20:57.616Z and ts=2022-10-13T09:20:57.619Z with the same "no space left on device" error)
level=warn ts=2022-10-13T09:20:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning for group=k8s.rules repeated 11 more times between ts=2022-10-13T09:20:57.658Z and ts=2022-10-13T09:20:57.706Z with the same "no space left on device" error)
level=error ts=2022-10-13T09:20:57.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:58.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:20:59.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:00.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:00.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:00.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:00.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning for group=openshift-etcd-telemetry.rules repeated 4 more times between ts=2022-10-13T09:21:01.474Z and ts=2022-10-13T09:21:01.475Z with the same "no space left on device" error)
level=error ts=2022-10-13T09:21:01.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning for group=node-exporter.rules repeated 10 more times between ts=2022-10-13T09:21:02.545Z and ts=2022-10-13T09:21:02.549Z with the same "no space left on device" error)
level=error ts=2022-10-13T09:21:02.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:02.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:04.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:05.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:08.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times, through ts=2022-10-13T09:21:10.983Z]
level=error ts=2022-10-13T09:21:11.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:11.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:11.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:11.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:11.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:12.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:13.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:13.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 8 more times, through ts=2022-10-13T09:21:14.088Z]
level=error ts=2022-10-13T09:21:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.194Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.277Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:14.362Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:14.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:15.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:15.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:15.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:16.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:16.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:16.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:17.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:17.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:18.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 more times, through ts=2022-10-13T09:21:19.506Z]
level=warn ts=2022-10-13T09:21:19.557Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:19.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.714Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.720Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:20.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:20.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:20.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:20.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:20.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:21.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 41 more times, through ts=2022-10-13T09:21:22.602Z]
level=error ts=2022-10-13T09:21:22.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:22.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:22.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:23.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:24.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:24.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:25.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:25.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times, through ts=2022-10-13T09:21:27.618Z]
level=warn ts=2022-10-13T09:21:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:28.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:29.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:29.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:30.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:30.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:30.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:34.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:35.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.182Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.265Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.351Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:47.205Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.306Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8ACQ9A90SPAG11XR3CB5SA.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:21:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.778Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:50.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:50.345Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:50.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:50.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:50.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:50.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 more times, ts=2022-10-13T09:21:52.566Z to ts=2022-10-13T09:21:52.568Z]
level=error ts=2022-10-13T09:21:52.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 33 more times, ts=2022-10-13T09:21:52.569Z to ts=2022-10-13T09:21:52.608Z]
level=error ts=2022-10-13T09:21:52.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:52.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:55.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:55.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:55.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:55.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:57.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:57.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:57.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:57.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times, ts=2022-10-13T09:21:57.616Z to ts=2022-10-13T09:21:57.618Z]
level=warn ts=2022-10-13T09:21:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 11 more times, ts=2022-10-13T09:21:57.657Z to ts=2022-10-13T09:21:57.704Z]
level=error ts=2022-10-13T09:21:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 4 more times, ts=2022-10-13T09:22:01.475Z to ts=2022-10-13T09:22:01.476Z]
level=error ts=2022-10-13T09:22:02.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 more times, ts=2022-10-13T09:22:02.545Z to ts=2022-10-13T09:22:02.547Z]
level=error ts=2022-10-13T09:22:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:04.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:05.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:06.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:07.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:08.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:09.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:10.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning repeated 5 more times for group=kube-scheduler.rules between ts=09:22:10.980Z and 09:22:10.982Z, all with the same "no space left on device" WAL error)
level=error ts=2022-10-13T09:22:11.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:11.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:11.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:11.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:11.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:13.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:13.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning repeated 7 more times for group=kube-apiserver.rules between ts=09:22:14.005Z and 09:22:14.080Z, all with the same "no space left on device" WAL error)
level=error ts=2022-10-13T09:22:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.189Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:14.371Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:14.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:15.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:15.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:15.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:15.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:16.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:16.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:16.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:17.215Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:17.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:18.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:18.262Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.499Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning repeated 7 more times for group=openshift-ingress.rules between ts=09:22:19.503Z and 09:22:19.506Z, all with the same "no space left on device" WAL error)
level=warn ts=2022-10-13T09:22:19.661Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.668Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:20.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:20.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:20.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:20.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning repeated 41 more times for group=openshift-kubernetes.rules between ts=09:22:22.568Z and 09:22:22.613Z, all with the same "no space left on device" WAL error)
level=error ts=2022-10-13T09:22:22.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:22.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:22.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:25.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:25.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:25.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:26.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:27.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:27.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:27.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:27.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning repeated 5 more times for group=openshift-monitoring.rules between ts=09:22:27.616Z and 09:22:27.618Z, all with the same "no space left on device" WAL error)
level=warn ts=2022-10-13T09:22:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous warning repeated 11 more times for group=k8s.rules between ts=09:22:27.658Z and 09:22:27.722Z, all with the same "no space left on device" WAL error)
level=error ts=2022-10-13T09:22:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:29.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:29.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:29.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:30.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:30.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:30.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:32.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:33.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:34.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:35.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:37.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:38.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:39.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:40.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:41.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:41.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:41.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:41.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:42.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:43.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.150Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.227Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:44.308Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:45.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:45.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:45.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:46.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:46.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:46.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:47.190Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.307Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AEHWB8FZEWYPT4XFSG9G3.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:22:47.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:47.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:49.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.487Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.654Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.663Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:50.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:50.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:50.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:50.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:52.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:52.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:52.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:55.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:55.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:55.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:55.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:56.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:57.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:58.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:59.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:59.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:00.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:02.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:04.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:05.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:06.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:06.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:06.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:06.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:07.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:08.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:09.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:10.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:11.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:11.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:11.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:11.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:11.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:12.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:13.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.096Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.177Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.258Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:14.345Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:14.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:15.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:15.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:15.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:15.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:16.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:16.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:16.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:16.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:16.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:17.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:17.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:18.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.824Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:19.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:20.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:20.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:20.420Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:20.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:20.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:20.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:20.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:21.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:21.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:21.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:21.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:21.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:22.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:22.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:22.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:23.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:24.514Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:24.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:25.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:25.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:25.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:26.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:27.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:28.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:29.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:29.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:30.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:30.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:30.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 4 more times through ts=2022-10-13T09:23:31.475Z]
level=error ts=2022-10-13T09:23:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times through ts=2022-10-13T09:23:32.548Z]
level=error ts=2022-10-13T09:23:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:33.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:34.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:36.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:37.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:38.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:40.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 5 more times through ts=2022-10-13T09:23:40.983Z]
level=error ts=2022-10-13T09:23:41.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:41.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:41.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:41.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:42.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:43.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 8 more times through ts=2022-10-13T09:23:44.092Z]
level=error ts=2022-10-13T09:23:44.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.277Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:44.369Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:44.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:45.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:45.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:45.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:45.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:45.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:46.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:46.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:46.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:46.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:47.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.311Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AGCFFW8KE3YM61F7YWBTD.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:23:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:47.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:49.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 7 more times through ts=2022-10-13T09:23:49.507Z]
level=warn ts=2022-10-13T09:23:49.629Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.798Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.804Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:50.256Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:50.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:50.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:50.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:51.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 11 more times through ts=2022-10-13T09:23:52.572Z]
level=error ts=2022-10-13T09:23:52.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 29 identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules omitted (ts=2022-10-13T09:23:52.573Z through 09:23:52.608Z) ...]
level=error ts=2022-10-13T09:23:52.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:52.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:54.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:55.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:55.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:55.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:56.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:57.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:57.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:57.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:57.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 identical "Rule sample appending failed" warnings for group=openshift-monitoring.rules omitted (ts=2022-10-13T09:23:57.616Z through 09:23:57.618Z) ...]
level=warn ts=2022-10-13T09:23:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 11 identical "Rule sample appending failed" warnings for group=k8s.rules omitted (ts=2022-10-13T09:23:57.657Z through 09:23:57.703Z) ...]
level=error ts=2022-10-13T09:23:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:58.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:59.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:00.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:00.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:00.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 4 identical "Rule sample appending failed" warnings for group=openshift-etcd-telemetry.rules omitted (ts=2022-10-13T09:24:01.474Z through 09:24:01.475Z) ...]
level=error ts=2022-10-13T09:24:02.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 10 identical "Rule sample appending failed" warnings for group=node-exporter.rules omitted (ts=2022-10-13T09:24:02.544Z through 09:24:02.547Z) ...]
level=error ts=2022-10-13T09:24:02.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:02.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:04.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:05.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:06.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:07.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:08.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 3 identical "Rule sample appending failed" warnings for group=kube-scheduler.rules omitted (ts=2022-10-13T09:24:10.982Z through 09:24:10.983Z) ...]
level=error ts=2022-10-13T09:24:11.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:11.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:11.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:11.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:11.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:12.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:13.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.125Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.242Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.339Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:14.437Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:14.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:15.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:15.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:15.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:16.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:16.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:17.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:17.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.591Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.753Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.761Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:20.155Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:20.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:20.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:20.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:20.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:20.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:20.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:21.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:22.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:22.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:22.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:23.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:24.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:27.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:27.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:27.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:27.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.085Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:28.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:29.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:29.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:30.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:30.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:30.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:30.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:34.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:37.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:38.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:40.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:41.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:41.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:41.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:41.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:41.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:42.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:42.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:43.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.094Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.194Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.281Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:44.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:44.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:45.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:45.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:45.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:45.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:46.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:46.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:46.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.470Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.470Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:46.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.540Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.312Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AJ72FSNFRE9H7V08STVMJ.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:24:47.349Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:47.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:49.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.782Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:49.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:50.350Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:50.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:50.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:50.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:50.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:51.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:51.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:51.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:52.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:52.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:52.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:54.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:55.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:55.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:55.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:56.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:57.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:57.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:57.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:57.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:24:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:57.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:59.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:24:59.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:00.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:00.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:00.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:02.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:04.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:05.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:06.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:07.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:08.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:10.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:11.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:11.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:11.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:11.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:11.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:13.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:13.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.159Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:14.331Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:14.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:15.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:15.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:15.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:17.287Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.596Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.762Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.768Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:20.167Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:20.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:20.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:20.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:20.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:22.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:22.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:22.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:25.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:25.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:25.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:29.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:29.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:30.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:30.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:34.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:36.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.732Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:41.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:41.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:41.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:41.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.191Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.288Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.385Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:47.213Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.312Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AM1NG3F7MP0QXFC8D58QR.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:25:47.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.622Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.778Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.786Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:50.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:50.215Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:50.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:50.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:50.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:50.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:52.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:52.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:52.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:53.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:55.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:55.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:55.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:55.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:04.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:05.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:06.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:08.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:10.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:11.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:11.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:11.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:11.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:11.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:13.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:13.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.302Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:14.390Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:14.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:15.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:15.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:15.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:15.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:16.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:16.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:16.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:16.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:17.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:17.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:18.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.653Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.832Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.840Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:20.233Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:20.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:20.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:20.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:20.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:21.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:21.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:21.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:22.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:22.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:22.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:24.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:25.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:25.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:26.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:27.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:27.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:27.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:27.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:28.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:29.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:30.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:30.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:30.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:34.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:35.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:36.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:38.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:39.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:40.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:41.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:41.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:41.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:41.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:41.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:42.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:42.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:43.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.095Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.263Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:44.349Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:44.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:45.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:45.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:46.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:46.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:46.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:47.295Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.313Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8ANW8H9XM7K3AGZWMNFS1D.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:26:47.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:47.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:48.265Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.698Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.859Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.868Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:50.257Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:50.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:50.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:50.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:52.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:52.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:52.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:53.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:54.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:55.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:55.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:59.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:59.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:00.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:00.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:00.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:01.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.478Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.478Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.479Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.479Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:01.480Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:03.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:04.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:05.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:06.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:07.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:07.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:07.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:07.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:07.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:08.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:10.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:11.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:11.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:11.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:11.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:11.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.205Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.293Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:14.377Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:14.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:15.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:15.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:15.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:15.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:16.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:16.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:16.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:16.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:17.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:17.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.540Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.695Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.702Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:20.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:20.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:20.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:20.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:20.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:20.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:21.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:21.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:22.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:22.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:22.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:23.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:24.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:25.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:25.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:26.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:27.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:28.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:29.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:30.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:30.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:30.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:30.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:30.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:34.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:35.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:38.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:40.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:41.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:41.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:41.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:41.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:41.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:42.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:42.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:43.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.181Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.264Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:44.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:44.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:45.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:45.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:46.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:46.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:46.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:46.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:47.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.315Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AQPVJ52KYQXHPG5FR30G8.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:27:47.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:47.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:49.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.565Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.720Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.728Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:50.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:50.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:50.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:50.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:50.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:50.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:51.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:51.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:52.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:52.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:52.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:55.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:55.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:55.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:55.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:56.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:58.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:59.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:59.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:59.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:00.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:00.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:02.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:03.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:04.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:05.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:08.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:09.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:10.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:11.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:11.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:11.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:11.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:12.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:13.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:13.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:13.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.193Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.287Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:14.369Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:14.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:15.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:15.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:15.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:16.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:16.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:16.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:16.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:17.192Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:17.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:18.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.501Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding warning for group=openshift-ingress.rules recurs 7 more times with the same error between 2022-10-13T09:28:19.503Z and 2022-10-13T09:28:19.506Z]
level=warn ts=2022-10-13T09:28:19.688Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:19.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.695Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:20.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:20.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:20.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:20.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:20.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:20.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:20.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:21.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:21.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:21.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:21.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding warning for group=openshift-kubernetes.rules recurs 41 more times with the same error between 2022-10-13T09:28:22.566Z and 2022-10-13T09:28:22.605Z]
level=error ts=2022-10-13T09:28:22.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:22.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:22.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:23.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:24.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:25.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:25.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:26.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:27.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:27.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding warning for group=k8s.rules recurs 11 more times with the same error between 2022-10-13T09:28:27.656Z and 2022-10-13T09:28:27.705Z]
level=error ts=2022-10-13T09:28:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:29.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:30.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:30.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:30.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:30.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:31.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the preceding warning for group=node-exporter.rules recurs 10 more times with the same error between 2022-10-13T09:28:32.545Z and 2022-10-13T09:28:32.549Z]
level=error ts=2022-10-13T09:28:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:34.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:35.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:36.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:38.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:40.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 further identical "Rule sample appending failed" warnings for group=kube-scheduler.rules between 09:28:40.981Z and 09:28:40.982Z omitted ...]
level=warn ts=2022-10-13T09:28:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:41.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:41.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:41.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:41.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:41.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:43.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:43.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 8 further identical "Rule sample appending failed" warnings for group=kube-apiserver.rules between 09:28:44.005Z and 09:28:44.070Z omitted ...]
level=error ts=2022-10-13T09:28:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.207Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.335Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:44.448Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:44.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:45.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:45.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:45.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:46.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:46.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:46.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:47.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.315Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8ASHEKDNF9V3ZXJ1HZ7Y0W.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:28:47.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:47.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 7 further identical "Rule sample appending failed" warnings for group=openshift-ingress.rules between 09:28:49.504Z and 09:28:49.507Z omitted ...]
level=warn ts=2022-10-13T09:28:49.548Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.712Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.719Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:50.092Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:50.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:50.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:50.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:50.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:50.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:51.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:51.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 38 further identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules between 09:28:52.567Z and 09:28:52.594Z omitted ...]
level=error ts=2022-10-13T09:28:52.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:52.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:52.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 3 further identical "Rule sample appending failed" warnings for group=kube-prometheus-node-recording.rules between 09:28:54.510Z and 09:28:54.511Z omitted ...]
level=error ts=2022-10-13T09:28:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:54.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:55.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:55.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:55.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:56.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:57.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:57.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:57.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:57.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:57.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:57.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 further identical "Rule sample appending failed" warnings for group=openshift-monitoring.rules between 09:28:57.615Z and 09:28:57.618Z omitted ...]
level=warn ts=2022-10-13T09:28:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:57.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:58.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:59.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:00.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:00.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:00.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:00.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:01.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:02.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:04.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:05.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:06.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:08.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:10.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:11.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:11.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:11.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:11.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:13.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.355Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:15.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:15.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.087Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:16.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:16.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:16.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:17.247Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:17.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:19.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.718Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.888Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.896Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:20.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:20.308Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:20.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:20.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:20.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:20.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:21.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:21.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:21.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:22.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:22.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:22.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:24.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:25.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:25.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:25.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:26.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:27.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:28.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:29.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:29.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:30.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:30.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:30.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:31.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:32.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:34.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:35.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:36.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:37.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:38.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:40.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 4 further consecutive "Rule sample appending failed" warnings for group=kube-scheduler.rules (ts 09:29:40.982 to 09:29:40.984), all with the same "no space left on device" error, omitted ...]
level=error ts=2022-10-13T09:29:41.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:41.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:41.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:41.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:41.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:42.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:42.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:43.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 8 further consecutive "Rule sample appending failed" warnings for group=kube-apiserver.rules (ts 09:29:44.006 to 09:29:44.080), all with the same "no space left on device" error, omitted ...]
level=error ts=2022-10-13T09:29:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:44.448Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:44.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:45.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:45.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:45.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:46.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:46.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:46.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:46.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:47.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.316Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AVC1KF2V4W2BVK8DH34CM.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:29:47.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:47.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:48.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:49.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.492Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 7 further consecutive "Rule sample appending failed" warnings for group=openshift-ingress.rules (ts 09:29:49.504 to 09:29:49.510), all with the same "no space left on device" error, omitted ...]
level=warn ts=2022-10-13T09:29:49.643Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.650Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:50.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:50.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:50.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:50.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:50.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:50.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:50.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:51.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:51.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:51.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:51.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 41 further consecutive "Rule sample appending failed" warnings for group=openshift-kubernetes.rules (ts 09:29:52.566 to 09:29:52.607), all with the same "no space left on device" error, omitted ...]
level=error ts=2022-10-13T09:29:52.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:52.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:52.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:54.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:55.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:55.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:56.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:57.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:57.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:57.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 further consecutive "Rule sample appending failed" warnings for group=openshift-monitoring.rules (ts 09:29:57.615 to 09:29:57.618), all with the same "no space left on device" error, omitted ...]
level=warn ts=2022-10-13T09:29:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 11 further consecutive "Rule sample appending failed" warnings for group=k8s.rules (ts 09:29:57.657 to 09:29:57.711), all with the same "no space left on device" error, omitted ...]
level=error ts=2022-10-13T09:29:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:59.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:00.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:00.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:00.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:01.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:03.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:04.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:05.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:06.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:08.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:09.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:10.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:11.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:11.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:11.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:11.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:13.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:13.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:13.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.304Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:14.391Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:14.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:15.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:15.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:15.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:15.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:16.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:16.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:16.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:17.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:17.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:18.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:19.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.721Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.897Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.904Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:20.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:20.310Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:20.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:20.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:20.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:21.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[41 additional identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules, ts 09:30:22.566Z through 09:30:22.608Z, all with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"]
level=error ts=2022-10-13T09:30:22.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:22.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:22.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:24.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:25.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:25.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:26.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:27.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:27.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:27.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[5 additional identical "Rule sample appending failed" warnings for group=openshift-monitoring.rules, ts 09:30:27.615Z through 09:30:27.617Z, all with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"]
level=warn ts=2022-10-13T09:30:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[11 additional identical "Rule sample appending failed" warnings for group=k8s.rules, ts 09:30:27.657Z through 09:30:27.713Z, all with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"]
level=error ts=2022-10-13T09:30:27.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:28.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:29.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:29.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:30.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:30.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:30.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:30.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:31.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[4 additional identical "Rule sample appending failed" warnings for group=openshift-etcd-telemetry.rules, ts 09:30:31.474Z through 09:30:31.475Z, all with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"]
level=error ts=2022-10-13T09:30:32.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[10 additional identical "Rule sample appending failed" warnings for group=node-exporter.rules, ts 09:30:32.545Z through 09:30:32.547Z, all with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"]
level=error ts=2022-10-13T09:30:32.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:32.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:34.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:35.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:37.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:38.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:39.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:41.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:41.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:41.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:41.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:41.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:42.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:43.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:43.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous message repeated 8 more times, ts=2022-10-13T09:30:44.003Z through ts=2022-10-13T09:30:44.089Z)
level=error ts=2022-10-13T09:30:44.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:44.175Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:44.267Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:44.362Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:44.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:45.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:45.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:45.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:46.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:46.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:46.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:46.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:47.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.316Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AX6MM16Z7WR1AZBXK1CZB.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:30:47.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:47.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:48.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:49.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous message repeated 7 more times, ts=2022-10-13T09:30:49.504Z through ts=2022-10-13T09:30:49.507Z)
level=warn ts=2022-10-13T09:30:49.519Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:49.682Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:49.693Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:50.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:50.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:50.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:50.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:50.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:50.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:50.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:51.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:51.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:51.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:51.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:51.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous message repeated 41 more times, ts=2022-10-13T09:30:52.566Z through ts=2022-10-13T09:30:52.612Z)
level=error ts=2022-10-13T09:30:52.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:52.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:52.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:53.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:54.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:55.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:55.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:55.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:55.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:57.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:57.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:57.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:57.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous message repeated 5 more times, ts=2022-10-13T09:30:57.615Z through ts=2022-10-13T09:30:57.619Z)
level=warn ts=2022-10-13T09:30:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(previous message repeated 11 more times, ts=2022-10-13T09:30:57.657Z through ts=2022-10-13T09:30:57.716Z)
level=error ts=2022-10-13T09:30:57.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:58.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:30:59.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:00.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:00.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:00.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 4 more times between ts=2022-10-13T09:31:01.474Z and ts=2022-10-13T09:31:01.475Z]
level=error ts=2022-10-13T09:31:01.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 more times between ts=2022-10-13T09:31:02.545Z and ts=2022-10-13T09:31:02.548Z]
level=error ts=2022-10-13T09:31:02.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:04.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:05.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:08.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times between ts=2022-10-13T09:31:10.981Z and ts=2022-10-13T09:31:10.983Z]
level=error ts=2022-10-13T09:31:11.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:11.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:11.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:11.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:13.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:13.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 more times between ts=2022-10-13T09:31:14.012Z and ts=2022-10-13T09:31:14.084Z]
level=error ts=2022-10-13T09:31:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.172Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.282Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:14.373Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:14.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:15.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:15.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:15.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:16.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:16.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:17.246Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:17.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 more times between ts=2022-10-13T09:31:19.504Z and ts=2022-10-13T09:31:19.508Z]
level=warn ts=2022-10-13T09:31:19.648Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.837Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.846Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:20.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:20.256Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:20.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:20.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:20.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:20.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:20.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:21.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:21.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:21.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 29 more times between ts=2022-10-13T09:31:22.569Z and ts=2022-10-13T09:31:22.586Z]
level=warn ts=2022-10-13T09:31:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:22.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:22.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:22.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 3 times (through ts=2022-10-13T09:31:24.512Z)
level=error ts=2022-10-13T09:31:24.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:24.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:25.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:26.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:27.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:27.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 5 times (through ts=2022-10-13T09:31:27.619Z)
level=warn ts=2022-10-13T09:31:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 11 times (through ts=2022-10-13T09:31:27.706Z)
level=error ts=2022-10-13T09:31:27.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:28.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:29.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:29.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:30.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:30.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 4 times (through ts=2022-10-13T09:31:31.476Z)
level=error ts=2022-10-13T09:31:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 10 times (through ts=2022-10-13T09:31:32.547Z)
level=error ts=2022-10-13T09:31:32.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:33.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:34.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:36.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:37.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:38.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:40.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
last message repeated 5 times (through ts=2022-10-13T09:31:40.983Z)
level=error ts=2022-10-13T09:31:41.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:41.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:41.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:41.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:41.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:43.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:43.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.162Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:44.345Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:44.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:45.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:45.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:45.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:45.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:46.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:46.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:46.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.542Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:47.302Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.317Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AZ17NXTWYVJYG2YA3YY20.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:31:47.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:47.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:48.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:49.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.700Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.869Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.879Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:50.328Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:50.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:50.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:50.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:51.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:51.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:51.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:51.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:52.664Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:52.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:52.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:54.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:54.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:55.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:55.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:56.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:57.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:57.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:59.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:00.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:00.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:01.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:04.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:05.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:06.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:07.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:07.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:07.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:07.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:07.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:08.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:10.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:11.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:11.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:11.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:11.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:11.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:13.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:13.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.352Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:14.445Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:14.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:15.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:15.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:15.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:15.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:16.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:16.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:16.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:17.249Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:17.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:19.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.710Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.877Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.885Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:20.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:20.305Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:20.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:20.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:20.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:21.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:21.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:21.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:21.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:21.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 23 more times for group=openshift-kubernetes.rules, through ts=2022-10-13T09:32:22.608Z]
level=error ts=2022-10-13T09:32:22.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:22.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:22.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:23.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:25.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:25.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times for group=openshift-monitoring.rules, through ts=2022-10-13T09:32:27.621Z]
level=warn ts=2022-10-13T09:32:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 11 more times for group=k8s.rules, through ts=2022-10-13T09:32:27.709Z]
level=error ts=2022-10-13T09:32:27.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:28.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:29.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:30.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:30.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:30.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:30.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 4 more times for group=openshift-etcd-telemetry.rules, through ts=2022-10-13T09:32:31.475Z]
level=error ts=2022-10-13T09:32:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 more times for group=node-exporter.rules, through ts=2022-10-13T09:32:32.549Z]
level=error ts=2022-10-13T09:32:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:32.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:33.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:35.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:37.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:38.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times for group=kube-scheduler.rules, through ts=2022-10-13T09:32:40.982Z]
level=error ts=2022-10-13T09:32:41.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:41.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:41.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:43.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 more times for group=kube-apiserver.rules, through ts=2022-10-13T09:32:44.078Z]
level=error ts=2022-10-13T09:32:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.191Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.282Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:44.383Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:44.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:45.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:45.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:45.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:45.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:46.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:46.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:46.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:47.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.318Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B0VTPJQP2DQ4NGPD83Q4J.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:32:47.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:47.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:48.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 7 near-identical openshift-ingress.rules warnings between 09:32:49.504Z and 09:32:49.506Z elided ...]
level=warn ts=2022-10-13T09:32:49.674Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.836Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.842Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:50.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:50.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:50.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:50.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:50.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:51.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:51.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:52.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 41 near-identical openshift-kubernetes.rules warnings between 09:32:52.566Z and 09:32:52.616Z elided ...]
level=error ts=2022-10-13T09:32:52.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:52.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:54.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:55.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:55.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:55.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:55.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:55.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:56.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:57.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:57.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:57.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 near-identical openshift-monitoring.rules warnings between 09:32:57.618Z and 09:32:57.620Z elided ...]
level=warn ts=2022-10-13T09:32:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 11 near-identical k8s.rules warnings between 09:32:57.657Z and 09:32:57.714Z elided ...]
level=error ts=2022-10-13T09:32:57.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:58.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:59.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:00.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:00.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:01.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 10 near-identical node-exporter.rules warnings between 09:33:02.545Z and 09:33:02.551Z elided ...]
level=error ts=2022-10-13T09:33:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:02.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:04.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:05.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:06.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:08.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:10.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:11.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:11.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:11.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:11.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:11.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:12.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:13.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:13.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.174Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.259Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:14.347Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:14.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:15.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:15.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:15.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:15.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:15.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:16.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:16.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:16.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:16.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:16.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.089Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:17.429Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:17.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:18.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.764Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.924Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.932Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:19.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:20.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:20.316Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:20.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:20.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:20.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:20.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:20.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:21.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:21.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:21.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:21.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:21.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:22.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:22.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:22.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:23.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:24.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:25.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:25.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:25.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:25.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:26.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:27.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.418Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:28.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:29.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:30.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:31.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:32.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.727Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:41.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:41.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:41.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:41.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:41.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:41.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.979Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.192Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.382Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:47.184Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.318Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B2PDP74WWE2V16M2MCQH1.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:33:47.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.511Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.666Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.674Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:50.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:50.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:50.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:50.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:50.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:52.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:52.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:52.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:55.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:55.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:58.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:59.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:00.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:00.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:04.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:05.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:06.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:07.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:08.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:09.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:11.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:11.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:11.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:11.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:11.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:12.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.097Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.331Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:14.421Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:14.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:15.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:15.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:15.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:15.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:16.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:16.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:16.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:16.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:17.226Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:17.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.608Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.807Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.820Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:19.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:20.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:20.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:20.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:20.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:20.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:21.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:21.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:22.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:22.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:22.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:24.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:25.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:26.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:27.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:28.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:29.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:30.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:30.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:30.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:31.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:31.477Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:32.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:34.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:35.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:37.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:38.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:40.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:41.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:41.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:41.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:41.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:41.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:43.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:43.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.169Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:44.340Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:44.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:45.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:45.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:45.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:45.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:46.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:46.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:47.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.319Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B4H0QX6SSAW8CG75K32WK.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:34:47.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:47.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:49.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.740Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.903Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.910Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:50.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:50.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:50.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:50.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:50.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:50.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:51.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:51.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:51.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:51.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:52.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:52.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:52.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:54.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:55.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:55.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:55.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:55.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:55.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:57.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:57.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:57.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:57.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:57.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:57.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:58.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:59.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:00.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:00.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:02.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:04.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:05.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:08.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:11.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:11.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:11.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:11.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:11.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:12.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:13.979Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.172Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.255Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:14.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:14.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:15.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:15.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:15.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:15.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:16.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:16.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:16.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:17.258Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:17.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:18.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:19.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.695Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.854Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.861Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:20.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:20.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:20.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:20.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:20.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:21.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:21.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:21.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:22.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:22.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:22.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:24.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:25.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:25.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:25.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:26.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:27.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:28.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:29.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:30.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:30.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:32.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:34.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:35.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:37.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:38.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:40.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:41.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:41.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:41.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:41.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:41.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:41.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:42.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:42.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:43.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:43.974Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:43.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:43.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.183Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.275Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:44.379Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:45.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:45.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:45.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:47.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.320Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B6BKRXJDPSAAMBV90R07D.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:35:47.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.713Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.861Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.873Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:50.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:50.293Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:50.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:50.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:50.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:50.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:50.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:53.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:55.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:55.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:55.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.418Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:11.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:11.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:11.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:11.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:11.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:11.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:12.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:12.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:12.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:13.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:13.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.081Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.146Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.299Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.387Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:14.478Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:14.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:15.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:15.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:15.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:16.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:16.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:16.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:17.187Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:17.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.571Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:19.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.729Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.737Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:20.185Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:20.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:20.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:20.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:22.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:22.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:22.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:24.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:25.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:25.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:25.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.754Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.754Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.755Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:29.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:30.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:30.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:30.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:30.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:30.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:30.597Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:31.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:34.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:35.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:38.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:40.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:41.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:41.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:41.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:41.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:41.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:42.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:43.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:43.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.211Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:44.406Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:44.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:45.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:45.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:45.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:45.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:46.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:46.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:46.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.462Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:46.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:47.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.321Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B866RZRKZGXKJSA4ERYHQ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:36:47.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:47.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:49.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.639Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.824Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.832Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:50.225Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:50.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:50.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:50.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:50.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:51.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=openshift-kubernetes.rules warning repeated 38 more times between ts=2022-10-13T09:36:52.567Z and ts=2022-10-13T09:36:52.596Z]
level=error ts=2022-10-13T09:36:52.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:52.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:52.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:54.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:55.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:57.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:57.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:57.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=openshift-monitoring.rules warning repeated 5 more times between ts=2022-10-13T09:36:57.616Z and ts=2022-10-13T09:36:57.619Z]
level=warn ts=2022-10-13T09:36:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=k8s.rules warning repeated 11 more times between ts=2022-10-13T09:36:57.658Z and ts=2022-10-13T09:36:57.716Z]
level=error ts=2022-10-13T09:36:57.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:58.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:59.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:00.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:01.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical group=node-exporter.rules warning repeated 10 more times between ts=2022-10-13T09:37:02.545Z and ts=2022-10-13T09:37:02.549Z]
level=error ts=2022-10-13T09:37:02.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:04.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:05.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:06.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:08.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:11.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:11.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:11.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:11.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:13.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:13.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.182Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.262Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:14.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:14.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:15.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:15.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:15.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:16.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:16.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:16.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:17.262Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:17.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:17.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.754Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.934Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:19.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:20.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:20.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:20.387Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:20.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:20.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:20.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:21.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:21.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:22.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:22.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:24.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:25.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:25.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:26.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.742Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.743Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:27.743Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:27.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:28.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:29.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:30.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:30.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:30.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:30.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:30.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:32.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:34.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:35.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.093Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:38.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:39.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:40.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:41.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:41.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:41.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:41.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.202Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:44.387Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:44.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:45.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:45.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:46.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:46.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:47.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.321Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BA0SS1TQ3ZZA5RWVRPXH2.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:37:47.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:47.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:48.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:49.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.629Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.793Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.802Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:50.218Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:50.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:50.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:50.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:51.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:51.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:51.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:52.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:52.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:52.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:53.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:54.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:55.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:55.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:56.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:37:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:58.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:37:59.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:00.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:00.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:00.589Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:00.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:01.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:02.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:04.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:06.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:08.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:10.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:11.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:11.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:11.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:11.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:12.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:13.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:13.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:13.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 8 times; last at ts=2022-10-13T09:38:14.099Z)
level=error ts=2022-10-13T09:38:14.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.187Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:14.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:14.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:15.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:15.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:15.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.090Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.129Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:16.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:16.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.463Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.466Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:16.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.557Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:16.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:17.385Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:17.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:19.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 7 times; last at ts=2022-10-13T09:38:19.507Z)
level=error ts=2022-10-13T09:38:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.695Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.861Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.869Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:19.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:20.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:20.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:20.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:20.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:20.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:20.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:20.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:21.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:21.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:21.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:21.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:21.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 41 times; last at ts=2022-10-13T09:38:22.604Z)
level=error ts=2022-10-13T09:38:22.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:22.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:22.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:23.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 3 times; last at ts=2022-10-13T09:38:24.512Z)
level=error ts=2022-10-13T09:38:24.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:24.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:25.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:25.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:25.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:26.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:27.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:27.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:27.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 5 times; last at ts=2022-10-13T09:38:27.618Z)
level=warn ts=2022-10-13T09:38:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 11 times; last at ts=2022-10-13T09:38:27.720Z)
level=error ts=2022-10-13T09:38:27.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:28.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:29.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:30.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:30.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:30.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:31.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[4 further identical warnings for group=openshift-etcd-telemetry.rules between 09:38:31.474Z and 09:38:31.475Z elided]
level=error ts=2022-10-13T09:38:32.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[10 further identical warnings for group=node-exporter.rules between 09:38:32.545Z and 09:38:32.549Z elided]
level=error ts=2022-10-13T09:38:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:34.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:35.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[5 further identical warnings for group=kube-scheduler.rules between 09:38:40.981Z and 09:38:40.983Z elided]
level=error ts=2022-10-13T09:38:41.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:41.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:41.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:41.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:41.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[7 further identical warnings for group=kube-apiserver.rules between 09:38:44.011Z and 09:38:44.086Z elided]
level=error ts=2022-10-13T09:38:44.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.205Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.394Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:47.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.323Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BBVCVMWNJPM6P9RHBTM3D.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:38:47.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[7 further identical warnings for group=openshift-ingress.rules between 09:38:49.503Z and 09:38:49.507Z elided]
level=warn ts=2022-10-13T09:38:49.533Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.687Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.695Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:50.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:50.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:50.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:50.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:50.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[32 further identical warnings for group=openshift-kubernetes.rules between 09:38:52.566Z and 09:38:52.581Z elided]
level=warn ts=2022-10-13T09:38:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:52.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:52.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:52.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:53.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical kube-prometheus-node-recording.rules warning repeated 3 more times through ts=2022-10-13T09:38:54.511Z]
level=error ts=2022-10-13T09:38:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:55.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:55.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:55.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:55.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:55.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:57.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:57.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:57.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical openshift-monitoring.rules warning repeated 5 more times through ts=2022-10-13T09:38:57.621Z]
level=warn ts=2022-10-13T09:38:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical k8s.rules warning repeated 11 more times through ts=2022-10-13T09:38:57.729Z]
level=error ts=2022-10-13T09:38:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical openshift-etcd-telemetry.rules warning repeated 4 more times through ts=2022-10-13T09:39:01.476Z]
level=error ts=2022-10-13T09:39:01.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical node-exporter.rules warning repeated 10 more times through ts=2022-10-13T09:39:02.548Z]
level=error ts=2022-10-13T09:39:02.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:04.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:05.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:06.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:06.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:06.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:07.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:08.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:11.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:11.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:11.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:11.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:11.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:11.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:12.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:13.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.292Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:14.381Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:14.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:15.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:15.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:15.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:15.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:15.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:16.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:16.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:16.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:17.325Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:17.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:18.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.728Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.890Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.897Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:20.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:20.292Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:20.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:20.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:20.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:20.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:21.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:21.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:22.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:22.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:22.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:24.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:25.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:25.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:25.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:26.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:27.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:28.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:29.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:29.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:29.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:29.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:30.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:30.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:33.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:34.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:35.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:36.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:37.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:38.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:40.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:41.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:41.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:41.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:41.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:41.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:43.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:43.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:43.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.152Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.247Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.360Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:44.467Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:44.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:45.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:45.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:45.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:46.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:46.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:47.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.324Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BDNZV2XN10YRY99A6MS33.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:39:47.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:47.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:48.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:49.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.631Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.794Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.803Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:50.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:50.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:50.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:50.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:50.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:51.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:51.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:51.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:51.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 24 more times for group=openshift-kubernetes.rules through ts=2022-10-13T09:39:52.611Z]
level=error ts=2022-10-13T09:39:52.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:52.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:52.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:54.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:55.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:55.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:56.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:57.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:57.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:57.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:57.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 5 more times for group=openshift-monitoring.rules through ts=2022-10-13T09:39:57.617Z]
level=warn ts=2022-10-13T09:39:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 11 more times for group=k8s.rules through ts=2022-10-13T09:39:57.709Z]
level=error ts=2022-10-13T09:39:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:59.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:59.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:00.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:00.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:00.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:00.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 4 more times for group=openshift-etcd-telemetry.rules through ts=2022-10-13T09:40:01.477Z]
level=error ts=2022-10-13T09:40:01.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:01.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 10 more times for group=node-exporter.rules through ts=2022-10-13T09:40:02.547Z]
level=error ts=2022-10-13T09:40:02.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:02.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:04.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:05.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:06.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:06.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:06.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:06.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:07.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 6 more times for group=kube-apiserver.rules through ts=2022-10-13T09:40:14.063Z]
level=error ts=2022-10-13T09:40:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.227Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.329Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.417Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:17.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.555Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.721Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.729Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:20.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:20.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:20.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:20.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:20.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:20.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:20.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:22.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:22.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:22.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:25.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:25.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.418Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:32.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:33.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:34.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:35.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:36.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:37.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:38.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:40.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:41.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:41.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:41.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:41.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:41.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:41.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:42.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:42.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:42.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:43.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.188Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:44.374Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:44.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:45.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:45.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:45.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:45.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:45.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:46.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:46.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:46.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:46.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:46.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:47.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.324Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BFGJWRH9E7BFX982VX0X5.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:40:47.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:47.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:49.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.672Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.834Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.843Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:50.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:50.302Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:50.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:50.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:50.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:50.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:50.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:51.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:51.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:51.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:51.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:52.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:52.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:52.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:53.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:54.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:55.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:55.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:55.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:55.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:56.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:57.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:57.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:57.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:57.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:58.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:59.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:59.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:00.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:00.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:01.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:04.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:05.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:06.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:07.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:07.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:07.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:07.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:07.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:08.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:11.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:11.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:11.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:11.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:11.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:13.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:13.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:14.384Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:14.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:15.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:15.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:15.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:15.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:15.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:16.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:16.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:17.273Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:17.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:18.262Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.666Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:19.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.827Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.839Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:20.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:20.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:20.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:20.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:20.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:20.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:20.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:21.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:21.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:21.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:21.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:21.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:22.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:22.630Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:22.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:22.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:23.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:24.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:25.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:25.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:26.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:27.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:27.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:29.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:29.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:29.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:30.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:30.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:30.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:34.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:35.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:37.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:38.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:40.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:41.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:41.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:41.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:41.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:41.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:42.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:43.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.297Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:44.388Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:44.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:45.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:45.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:45.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:46.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:46.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:47.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.325Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BHB5X8CZ42WJZK9HKZR3P.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:41:47.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:47.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:48.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:49.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.520Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.692Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.700Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:50.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:50.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:50.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:50.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:50.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:50.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:51.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:51.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:51.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:52.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:52.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:52.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:55.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:55.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:55.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:55.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:56.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:41:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:58.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:59.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:59.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:00.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:00.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:00.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:00.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:01.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:02.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:04.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:05.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:06.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.093Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:08.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:11.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:11.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:11.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:11.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:11.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:11.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:11.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:12.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:12.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:13.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:13.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.085Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.191Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.279Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:14.366Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:14.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:15.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:15.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:15.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:16.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:16.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:16.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:16.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:16.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:17.351Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:17.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:19.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 more times between ts=2022-10-13T09:42:19.504Z and ts=2022-10-13T09:42:19.506Z]
level=error ts=2022-10-13T09:42:19.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.774Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.935Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:19.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:20.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:20.360Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:20.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:20.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:20.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:20.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:20.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:21.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:21.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:21.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:21.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 41 more times between ts=2022-10-13T09:42:22.566Z and ts=2022-10-13T09:42:22.611Z]
level=error ts=2022-10-13T09:42:22.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:22.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:22.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:23.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:24.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:25.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:25.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:25.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:26.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:27.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:27.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:27.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times between ts=2022-10-13T09:42:27.616Z and ts=2022-10-13T09:42:27.618Z]
level=warn ts=2022-10-13T09:42:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 11 more times between ts=2022-10-13T09:42:27.658Z and ts=2022-10-13T09:42:27.714Z]
level=error ts=2022-10-13T09:42:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:28.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:29.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:30.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:30.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:30.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:30.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:31.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 4 more times between ts=2022-10-13T09:42:31.474Z and ts=2022-10-13T09:42:31.475Z]
level=error ts=2022-10-13T09:42:32.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 10 more times between ts=2022-10-13T09:42:32.545Z and ts=2022-10-13T09:42:32.548Z]
level=error ts=2022-10-13T09:42:32.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:34.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.190Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.280Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.369Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:47.281Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.326Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BK5RYX242FQF8447CC7AT.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:42:47.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.677Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.838Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.846Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:50.253Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:50.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:50.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:50.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:50.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:55.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:55.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:55.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:02.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:06.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:08.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:09.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:10.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:11.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:11.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:11.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:11.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:11.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:11.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:12.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:12.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:13.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:13.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.204Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.295Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:14.388Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:14.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:15.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:15.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:16.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:16.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:16.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:16.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:17.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:17.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:17.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:17.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.608Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.785Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.793Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:19.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:20.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:20.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:20.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:20.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:20.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:20.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:20.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:21.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:21.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:21.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:21.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:21.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:22.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:22.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:22.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:23.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:24.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:25.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:26.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:27.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:27.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:27.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:27.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:27.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:27.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:28.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:29.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:30.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:30.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:30.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:30.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:32.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:34.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:35.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:36.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:37.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:38.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:39.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:40.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:41.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:41.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:41.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:43.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:43.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:43.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.242Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.355Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:44.446Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:44.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:45.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:45.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:46.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:46.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:46.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:47.195Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.327Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BN0BZT0B6QQTGKCY63WMH.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:43:47.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:47.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:49.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 7 more times for group=openshift-ingress.rules between ts=2022-10-13T09:43:49.504Z and ts=2022-10-13T09:43:49.506Z]
level=warn ts=2022-10-13T09:43:49.599Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.752Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.760Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:50.148Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:50.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:50.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:50.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:51.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:51.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:51.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:51.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:43:52.566Z and ts=2022-10-13T09:43:52.610Z]
level=error ts=2022-10-13T09:43:52.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:52.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:52.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:54.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:55.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:55.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:55.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:55.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:56.019Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:56.020Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:56.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:57.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:57.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:57.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.751Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.751Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:57.752Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:57.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:58.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:59.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:00.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:00.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:00.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:00.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:02.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:04.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:05.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:07.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:08.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:11.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:11.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:11.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:11.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:11.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:11.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:12.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:13.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:13.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:13.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.188Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:14.377Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:14.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:15.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:15.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:15.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:16.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:16.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:16.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:16.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:17.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:17.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:18.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.811Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:19.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:20.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:20.392Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:20.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:20.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:20.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:21.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:21.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous line repeated 41 more times for group=openshift-kubernetes.rules, timestamps 09:44:22.567Z through 09:44:22.613Z]
level=error ts=2022-10-13T09:44:22.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:22.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:22.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:24.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:25.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:25.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:26.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:27.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:27.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:27.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:27.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous line repeated 5 more times for group=openshift-monitoring.rules, timestamps 09:44:27.616Z through 09:44:27.619Z]
level=warn ts=2022-10-13T09:44:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous line repeated 11 more times for group=k8s.rules, timestamps 09:44:27.658Z through 09:44:27.731Z]
level=error ts=2022-10-13T09:44:27.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:28.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:29.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:30.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:30.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous line repeated 4 more times for group=openshift-etcd-telemetry.rules, timestamps 09:44:31.474Z through 09:44:31.475Z]
level=error ts=2022-10-13T09:44:32.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous line repeated 10 more times for group=node-exporter.rules, timestamps 09:44:32.546Z through 09:44:32.550Z]
level=error ts=2022-10-13T09:44:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:34.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:35.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:36.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:37.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:38.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:40.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:41.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:41.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:41.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:41.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:41.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:41.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:42.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:42.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:42.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:43.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.189Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.280Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:44.381Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:44.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:45.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:45.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:46.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:46.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:46.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:47.170Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.328Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BPTZ09X6YYZB42N02YSVQ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:44:47.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:47.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.597Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.761Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.771Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:50.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:50.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:50.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:50.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:50.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:50.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:51.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:51.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:51.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:51.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:52.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:52.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:52.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:53.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:54.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:55.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:55.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:55.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:55.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:55.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:56.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:44:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:58.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:59.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:59.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:44:59.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:00.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:00.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:00.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:00.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:01.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:02.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:03.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:04.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:05.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:06.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:11.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:11.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:11.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:11.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:11.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:12.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:13.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.097Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.223Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.307Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:14.389Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:15.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:15.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:15.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:15.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:15.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:16.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:17.181Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:17.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:18.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:18.266Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:19.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.714Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.899Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.909Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:20.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:20.344Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:20.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:20.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:20.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:20.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:21.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:21.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:21.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:21.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:21.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 41 more times through ts=2022-10-13T09:45:22.615Z)
level=error ts=2022-10-13T09:45:22.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:22.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:22.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:24.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:25.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:25.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:25.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:26.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:27.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:27.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 5 more times through ts=2022-10-13T09:45:27.618Z)
level=warn ts=2022-10-13T09:45:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 11 more times through ts=2022-10-13T09:45:27.718Z)
level=error ts=2022-10-13T09:45:27.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:28.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:29.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:29.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:29.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:29.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:30.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:31.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 4 more times through ts=2022-10-13T09:45:31.475Z)
level=error ts=2022-10-13T09:45:32.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
(last message repeated 10 more times through ts=2022-10-13T09:45:32.549Z)
level=error ts=2022-10-13T09:45:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:32.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:34.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:36.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:37.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:38.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:41.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:41.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:41.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:41.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:41.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:42.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:43.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:43.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.177Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.277Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:44.374Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:44.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:45.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:45.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:46.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:46.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:46.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:46.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:47.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.329Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BRNJ0CN40R10Y692V2DAN.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:45:47.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:47.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:48.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:49.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.610Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.777Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.786Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:50.185Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:50.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:50.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:50.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:50.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:50.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:50.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:51.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:51.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:51.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:51.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:51.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:52.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:52.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:52.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:52.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:53.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:53.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:55.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:55.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:55.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:59.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:00.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:00.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:00.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:00.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:11.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:11.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:11.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:11.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:14.384Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:14.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:16.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:16.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:16.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:16.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:17.204Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:19.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.739Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.900Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.908Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:19.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:20.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:20.321Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:20.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:20.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:20.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:20.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:21.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:21.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:21.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:22.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:22.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:22.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:24.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:25.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:26.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:27.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:27.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:27.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:27.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:28.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:29.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:30.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:30.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 7 further identical "Rule sample appending failed" warnings for group=kube-apiserver.rules between 09:46:44.024Z and 09:46:44.094Z, all failing with the same no-space-left-on-device WAL error ...]
level=error ts=2022-10-13T09:46:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.211Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.316Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.405Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:47.205Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.329Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BTG51ZVHR7TYG3HC08PJ6.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:46:47.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 7 further identical "Rule sample appending failed" warnings for group=openshift-ingress.rules between 09:46:49.509Z and 09:46:49.514Z, all failing with the same no-space-left-on-device WAL error ...]
level=warn ts=2022-10-13T09:46:49.666Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.905Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.919Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:50.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:50.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:50.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:50.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:50.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 41 further identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules between 09:46:52.568Z and 09:46:52.606Z, all failing with the same no-space-left-on-device WAL error ...]
level=error ts=2022-10-13T09:46:52.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:53.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:53.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:55.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:55.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:55.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:57.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:57.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:57.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 11 further identical "Rule sample appending failed" warnings for group=k8s.rules between 09:46:57.657Z and 09:46:57.726Z, all failing with the same no-space-left-on-device WAL error ...]
level=error ts=2022-10-13T09:46:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:00.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:00.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:00.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:01.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning "Rule sample appending failed" for group=node-exporter.rules repeated 10 more times between ts=2022-10-13T09:47:02.545Z and ts=2022-10-13T09:47:02.548Z, all failing with "no space left on device" on /prometheus/wal/00000039]
level=error ts=2022-10-13T09:47:02.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:04.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:05.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:06.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:07.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:08.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:09.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:10.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning "Rule sample appending failed" for group=kube-scheduler.rules repeated 5 more times between ts=2022-10-13T09:47:10.981Z and ts=2022-10-13T09:47:10.983Z, all failing with "no space left on device" on /prometheus/wal/00000039]
level=error ts=2022-10-13T09:47:11.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:11.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:11.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:11.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:13.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:13.979Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:13.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning "Rule sample appending failed" for group=kube-apiserver.rules repeated 7 more times between ts=2022-10-13T09:47:14.003Z and ts=2022-10-13T09:47:14.070Z, all failing with "no space left on device" on /prometheus/wal/00000039]
level=error ts=2022-10-13T09:47:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.183Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:14.356Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:14.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:15.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:15.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:15.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:15.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:16.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:16.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:17.237Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:17.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:18.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning "Rule sample appending failed" for group=openshift-ingress.rules repeated 7 more times between ts=2022-10-13T09:47:19.504Z and ts=2022-10-13T09:47:19.507Z, all failing with "no space left on device" on /prometheus/wal/00000039]
level=warn ts=2022-10-13T09:47:19.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.842Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.850Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:20.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:20.255Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:20.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:20.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:20.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:20.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:20.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:21.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:21.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:21.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:21.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning "Rule sample appending failed" for group=openshift-kubernetes.rules repeated 41 more times between ts=2022-10-13T09:47:22.566Z and ts=2022-10-13T09:47:22.607Z, all failing with "no space left on device" on /prometheus/wal/00000039]
level=error ts=2022-10-13T09:47:22.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:22.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:22.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:25.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:25.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:25.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:26.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:28.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:29.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:29.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:30.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:30.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:30.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:31.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:34.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:35.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:36.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:37.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:38.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:41.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:41.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:41.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:41.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:41.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:41.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:42.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:42.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:43.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.145Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:44.432Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:44.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:45.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:45.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:46.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:46.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:46.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:46.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:47.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.330Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BWAR1S5CQVX6HTVRGQXM7.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:47:47.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:47.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:48.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:49.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.618Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.772Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.779Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:50.159Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:50.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:50.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:50.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:50.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:51.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:52.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:52.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:52.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:53.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:55.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:55.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:55.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:56.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:57.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:57.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:57.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:57.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:57.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:58.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:59.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:59.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:00.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:00.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:00.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:00.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:00.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:02.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:04.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:05.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:06.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:06.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:06.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:06.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:07.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:08.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:10.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:11.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:11.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:11.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:11.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:11.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:12.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.204Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:14.385Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:14.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:15.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:15.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:15.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:16.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:16.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:16.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:16.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:16.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:17.173Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:17.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.482Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.656Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.664Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:20.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:20.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:20.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:20.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:20.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:20.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:20.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:21.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:21.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:21.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:21.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:21.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:22.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:22.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:22.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:23.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:24.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:25.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:25.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:25.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:26.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:27.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:27.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:27.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.125Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:27.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:27.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:27.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:28.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:29.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:30.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:30.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:30.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:31.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:34.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:35.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:36.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:38.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:40.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:41.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:41.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:41.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:41.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:42.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:43.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:43.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.125Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.254Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.411Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:44.549Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:44.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:45.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:45.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:45.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:45.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:45.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:46.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:46.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:46.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:47.230Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.330Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BY5B2E5PZV9BRYVMJKR14.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:48:47.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:47.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:49.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.558Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.715Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.723Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:50.126Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:50.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:50.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:50.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:50.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:50.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:51.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:51.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:51.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:51.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:51.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:52.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:52.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:52.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:54.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:55.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:55.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:55.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:56.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:57.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:57.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:57.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:48:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:57.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:59.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:00.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:00.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:00.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:00.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:01.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:04.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:05.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:07.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:08.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:11.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:11.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:11.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:11.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:12.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:13.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.296Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:14.388Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:14.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:15.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:15.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:15.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:15.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:16.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:16.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:16.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:17.262Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:17.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.595Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.756Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.764Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:20.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:20.196Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:20.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:20.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:20.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:21.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:21.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:21.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:22.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:22.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:22.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:23.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:24.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:25.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:26.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:27.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:29.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:29.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:30.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:30.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:30.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:30.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:33.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:34.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:35.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:37.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:38.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:40.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:41.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:41.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:41.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:41.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:41.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:41.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:42.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:42.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:43.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:43.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:43.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:44.398Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:44.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:45.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:45.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:45.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:45.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:45.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:46.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:46.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:46.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:47.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.331Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BZZY38MRSQRK5DRG6RM5G.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:49:47.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:47.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:49.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.527Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.705Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.715Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:50.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:50.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:50.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:50.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:50.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:51.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:51.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:51.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:52.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:52.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:52.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:55.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:56.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:57.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:57.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:57.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:57.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:57.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:58.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:59.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:59.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:00.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:00.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:00.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:00.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:00.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:01.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:04.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:05.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:06.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:06.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:06.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:08.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:10.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:11.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:11.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:11.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:11.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:13.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:13.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.219Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:14.455Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:14.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:15.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:15.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:15.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:15.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:16.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:16.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:16.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:16.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:17.491Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:17.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:18.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:18.263Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:20.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:20.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:20.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:20.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:20.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:20.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:20.629Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:20.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:20.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:21.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:21.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:22.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:22.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:22.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:23.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:24.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:25.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:25.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:25.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:26.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:27.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:27.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:28.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:29.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:29.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:30.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:30.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:30.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:30.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:31.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:32.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:34.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:35.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:37.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:38.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:40.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:40.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:41.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:41.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:41.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:41.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:42.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:42.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:42.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:43.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:43.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:43.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.253Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.380Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:44.501Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:44.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:45.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:45.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:45.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:45.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:46.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:46.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:46.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:46.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:46.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:47.175Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.332Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C1TH4BR60BVNK5TN3JR11.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:50:47.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:47.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:48.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:49.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.520Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.682Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.689Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:50.069Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:50.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:50.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:50.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:50.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:50.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:51.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:51.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:51.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:51.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:51.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:52.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:52.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:53.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:54.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:55.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:55.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:55.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:56.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:57.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:57.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:57.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:50:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:58.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:50:59.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:00.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:04.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:05.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:06.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:08.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:09.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:11.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:11.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:11.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:11.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:11.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:11.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:13.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:13.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.320Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:14.441Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:14.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:15.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:15.089Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:15.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:15.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:16.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:16.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:17.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:17.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:18.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:18.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.539Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:19.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.690Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.703Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:20.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:20.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:20.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:20.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:20.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:20.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:20.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:21.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:21.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:21.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:22.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:22.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:22.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:24.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:25.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:25.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:25.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:26.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:28.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:29.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:29.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:30.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:30.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:30.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:31.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:34.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:35.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:36.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:38.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:40.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:41.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:41.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:41.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:41.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:41.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:43.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.205Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.298Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:44.401Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:44.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:45.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:45.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:45.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:45.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:45.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:46.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:46.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:46.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:47.182Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.333Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C3N45HCC4329EG0RN08RX.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:51:47.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:47.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:49.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.608Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.768Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.777Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:50.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:50.226Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:50.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:50.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:50.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:50.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:51.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:51.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:51.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:52.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:52.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:52.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:53.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:54.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:55.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:55.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:55.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:56.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:57.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:57.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:57.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:57.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:57.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:51:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:58.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:51:59.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:00.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:00.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:00.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:00.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:02.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:04.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:08.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:10.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=kube-scheduler.rules repeated 6 times between ts=2022-10-13T09:52:10.981Z and ts=2022-10-13T09:52:10.983Z]
level=error ts=2022-10-13T09:52:11.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:11.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:11.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:11.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:11.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:13.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:13.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:13.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=kube-apiserver.rules repeated 8 times between ts=2022-10-13T09:52:14.006Z and ts=2022-10-13T09:52:14.075Z]
level=error ts=2022-10-13T09:52:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:14.379Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:14.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:15.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:15.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:15.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:16.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:16.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:16.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:16.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:17.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:17.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:18.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=openshift-ingress.rules repeated 8 times between ts=2022-10-13T09:52:19.503Z and ts=2022-10-13T09:52:19.506Z]
level=warn ts=2022-10-13T09:52:19.640Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.800Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.809Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:19.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:20.220Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:20.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:20.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:20.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:20.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:21.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:21.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:21.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=openshift-kubernetes.rules repeated 42 times between ts=2022-10-13T09:52:22.566Z and ts=2022-10-13T09:52:22.608Z]
level=error ts=2022-10-13T09:52:22.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:22.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:22.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:24.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:25.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:25.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:26.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:27.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:27.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:27.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:27.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=openshift-monitoring.rules repeated 6 times between ts=2022-10-13T09:52:27.615Z and ts=2022-10-13T09:52:27.619Z]
level=warn ts=2022-10-13T09:52:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=k8s.rules repeated 12 times between ts=2022-10-13T09:52:27.656Z and ts=2022-10-13T09:52:27.733Z]
level=error ts=2022-10-13T09:52:27.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:28.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:29.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:29.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:30.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:30.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:30.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:30.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 4 more times between ts=2022-10-13T09:52:31.474Z and ts=2022-10-13T09:52:31.475Z]
level=error ts=2022-10-13T09:52:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 10 more times between ts=2022-10-13T09:52:32.545Z and ts=2022-10-13T09:52:32.548Z]
level=error ts=2022-10-13T09:52:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:32.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 2 more times between ts=2022-10-13T09:52:34.300Z and ts=2022-10-13T09:52:34.301Z]
level=error ts=2022-10-13T09:52:34.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:34.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:35.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:37.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:38.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.728Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:40.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times between ts=2022-10-13T09:52:40.981Z and ts=2022-10-13T09:52:40.983Z]
level=error ts=2022-10-13T09:52:41.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:41.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:41.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:42.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 2 more times between ts=2022-10-13T09:52:43.961Z and ts=2022-10-13T09:52:43.979Z]
level=error ts=2022-10-13T09:52:43.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 7 more times between ts=2022-10-13T09:52:44.009Z and ts=2022-10-13T09:52:44.073Z]
level=error ts=2022-10-13T09:52:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 3 more times between ts=2022-10-13T09:52:44.195Z and ts=2022-10-13T09:52:44.400Z]
level=error ts=2022-10-13T09:52:44.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:44.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:45.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:45.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:45.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:45.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:46.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:46.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.539Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:47.216Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.334Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C5FQ5D90RS7W1VZERH410.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:52:47.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:47.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:48.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 7 more times between ts=2022-10-13T09:52:49.504Z and ts=2022-10-13T09:52:49.508Z]
level=warn ts=2022-10-13T09:52:49.543Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 2 more times between ts=2022-10-13T09:52:49.719Z and ts=2022-10-13T09:52:49.727Z]
level=warn ts=2022-10-13T09:52:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:50.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:50.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:50.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:50.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:51.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:51.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:51.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:51.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:51.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:52.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 16 more times between ts=2022-10-13T09:52:52.566Z and ts=2022-10-13T09:52:52.575Z]
level=warn ts=2022-10-13T09:52:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:52.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:52.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:52.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:55.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:55.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:55.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:55.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:56.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.748Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.749Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:52:57.750Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:57.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:58.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:59.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:59.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:52:59.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:00.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:00.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:00.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:00.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:01.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:04.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:05.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:06.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:07.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:08.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:11.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:11.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:11.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:11.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:12.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:13.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:13.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning repeated 6 more times for group=kube-apiserver.rules between ts=2022-10-13T09:53:14.037Z and ts=2022-10-13T09:53:14.074Z, same err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" ...]
level=error ts=2022-10-13T09:53:14.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.144Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.323Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:14.420Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:14.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:15.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:15.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:15.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:15.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:15.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:16.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:16.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:16.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:16.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:17.194Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:17.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:18.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:18.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.434Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning repeated 7 more times for group=openshift-ingress.rules between ts=2022-10-13T09:53:19.503Z and ts=2022-10-13T09:53:19.506Z, same err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" ...]
level=warn ts=2022-10-13T09:53:19.585Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.594Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:19.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:19.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:20.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:20.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:20.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:20.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:20.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:20.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:21.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:21.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:21.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:21.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:53:22.566Z and ts=2022-10-13T09:53:22.625Z, same err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" ...]
level=error ts=2022-10-13T09:53:22.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:22.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:22.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:23.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning repeated 3 more times for group=kube-prometheus-node-recording.rules between ts=2022-10-13T09:53:24.510Z and ts=2022-10-13T09:53:24.511Z, same err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" ...]
level=error ts=2022-10-13T09:53:24.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:24.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:25.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:25.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:25.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:26.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:27.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:27.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:27.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T09:53:27.615Z and ts=2022-10-13T09:53:27.617Z, same err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" ...]
level=warn ts=2022-10-13T09:53:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... previous warning repeated 11 more times for group=k8s.rules between ts=2022-10-13T09:53:27.658Z and ts=2022-10-13T09:53:27.718Z, same err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" ...]
level=error ts=2022-10-13T09:53:27.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:28.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:29.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:29.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:30.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:30.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:30.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:31.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:33.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:34.302Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:34.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:35.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:36.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:37.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:38.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:39.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:40.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:41.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:41.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:41.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:41.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:43.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:43.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:44.357Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:44.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:45.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:45.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:45.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:46.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:46.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:46.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:46.545Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:46.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:47.000Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:47.330Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.334Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C7AA6PEGRXWM59BYAY7WJ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:53:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:47.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:49.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.680Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.844Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.853Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:50.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:50.257Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:50.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:50.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:50.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:50.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:51.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:51.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:51.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:51.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 36 more times through ts=2022-10-13T09:53:52.604Z]
level=error ts=2022-10-13T09:53:52.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:52.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:52.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:52.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:53.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:55.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:55.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:56.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:57.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:57.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:57.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:53:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times through ts=2022-10-13T09:53:57.619Z]
level=warn ts=2022-10-13T09:53:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 11 more times through ts=2022-10-13T09:53:57.717Z]
level=error ts=2022-10-13T09:53:57.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:58.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:59.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:53:59.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:00.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:00.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:01.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 4 more times through ts=2022-10-13T09:54:01.476Z]
level=error ts=2022-10-13T09:54:02.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 10 more times through ts=2022-10-13T09:54:02.549Z]
level=error ts=2022-10-13T09:54:02.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:03.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:04.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:05.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:06.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:07.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:08.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times through ts=2022-10-13T09:54:10.983Z]
level=error ts=2022-10-13T09:54:11.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:11.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:11.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:11.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:13.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:13.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:13.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 6 further identical "Rule sample appending failed" warnings for group=kube-apiserver.rules (ts 09:54:14.021-09:54:14.083) omitted ...]
level=error ts=2022-10-13T09:54:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.207Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.304Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:14.396Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:14.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:15.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:15.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:15.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:16.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:16.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:17.308Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:17.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:18.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 7 further identical "Rule sample appending failed" warnings for group=openshift-ingress.rules (ts 09:54:19.503-09:54:19.506) omitted ...]
level=warn ts=2022-10-13T09:54:19.641Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.794Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.804Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:20.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:20.226Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:20.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:20.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:20.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:21.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:21.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:21.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 41 further identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules (ts 09:54:22.567-09:54:22.622) omitted ...]
level=error ts=2022-10-13T09:54:22.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:22.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:22.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:24.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:25.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:25.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:26.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:27.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:27.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:27.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:27.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 5 further identical "Rule sample appending failed" warnings for group=openshift-monitoring.rules (ts 09:54:27.616-09:54:27.619) omitted ...]
level=warn ts=2022-10-13T09:54:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 11 further identical "Rule sample appending failed" warnings for group=k8s.rules (ts 09:54:27.659-09:54:27.738) omitted ...]
level=error ts=2022-10-13T09:54:27.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:28.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:29.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:29.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:30.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:30.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:31.418Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 4 further identical "Rule sample appending failed" warnings for group=openshift-etcd-telemetry.rules (ts 09:54:31.474-09:54:31.475) omitted ...]
level=error ts=2022-10-13T09:54:32.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 10 more times for group=node-exporter.rules, ts=2022-10-13T09:54:32.545Z through ts=2022-10-13T09:54:32.548Z]
level=error ts=2022-10-13T09:54:32.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:34.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:35.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:38.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.728Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:40.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:41.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:41.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:41.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:41.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:41.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:41.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:42.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:43.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:43.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:43.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.310Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:44.416Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:44.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:45.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:45.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:45.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:46.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:46.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:46.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:47.184Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.335Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C94X79N1W9VE7X46EGCW4.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:54:47.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:47.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 7 more times for group=openshift-ingress.rules, ts=2022-10-13T09:54:49.503Z through ts=2022-10-13T09:54:49.507Z]
level=warn ts=2022-10-13T09:54:49.525Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.690Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:50.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:50.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:50.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:50.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:50.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:51.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:51.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:51.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 41 more times for group=openshift-kubernetes.rules, ts=2022-10-13T09:54:52.565Z through ts=2022-10-13T09:54:52.608Z]
level=error ts=2022-10-13T09:54:52.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:52.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:52.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=kube-prometheus-node-recording.rules repeated 3 more times, ts through 2022-10-13T09:54:54.511Z ...]
level=error ts=2022-10-13T09:54:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:54.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:55.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:55.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:55.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:55.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:56.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:57.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:57.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:57.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:57.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:57.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:54:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-monitoring.rules repeated 5 more times, ts through 2022-10-13T09:54:57.618Z ...]
level=warn ts=2022-10-13T09:54:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=k8s.rules repeated 11 more times, ts through 2022-10-13T09:54:57.723Z ...]
level=error ts=2022-10-13T09:54:57.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:58.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:54:59.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:00.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:00.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:00.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:00.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:01.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-etcd-telemetry.rules repeated 4 more times, ts through 2022-10-13T09:55:01.475Z ...]
level=error ts=2022-10-13T09:55:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=node-exporter.rules repeated 10 more times, ts through 2022-10-13T09:55:02.549Z ...]
level=error ts=2022-10-13T09:55:02.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:03.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:04.303Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:04.303Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:04.304Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:04.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:05.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:06.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:08.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:10.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=kube-scheduler.rules repeated 5 more times, ts through 2022-10-13T09:55:10.983Z ...]
level=warn ts=2022-10-13T09:55:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:11.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:11.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:11.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:11.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:11.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:11.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:13.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=kube-apiserver.rules repeated 3 more times, ts through 2022-10-13T09:55:14.004Z ...]
level=error ts=2022-10-13T09:55:14.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=kube-apiserver.rules repeated 6 more times, ts through 2022-10-13T09:55:14.071Z ...]
level=error ts=2022-10-13T09:55:14.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.226Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.340Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:14.424Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:14.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:15.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:15.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:15.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:16.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:16.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:16.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:16.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:17.246Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:17.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.093Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:18.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.649Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:19.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.818Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.827Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:20.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:20.230Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:20.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:20.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:20.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:20.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:21.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:21.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:21.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:22.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:22.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:22.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:23.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:24.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:25.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:25.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:25.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:26.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:27.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:27.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:27.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:28.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:29.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:29.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:30.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:30.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:30.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:31.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:34.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:35.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:38.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:40.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:41.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:41.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:41.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:41.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:41.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:42.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:42.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:43.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.106Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.300Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:44.393Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:44.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:45.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:45.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:45.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:45.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:46.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:46.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:46.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:46.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:47.237Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.336Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CAZG7PMW1JPQPMYPTC7C7.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:55:47.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:47.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:48.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:49.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.489Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.635Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.645Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:50.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:50.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:50.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:50.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:50.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:50.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:50.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:51.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:51.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:51.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:51.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:51.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:51.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:52.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:52.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:52.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:53.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:54.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:55.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:55.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:55.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:55.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:56.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:57.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:57.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:57.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:55:57.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:57.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:58.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:55:59.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:00.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:00.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:00.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:04.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:05.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:06.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:07.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:07.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:07.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:07.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:08.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:10.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:11.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:11.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:11.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:11.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:13.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.207Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.299Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:14.418Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:14.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:15.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:15.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:15.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:15.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:16.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:16.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:16.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:17.211Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:17.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.567Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.739Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.748Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:20.146Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:20.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:20.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:20.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:20.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:20.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:21.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:21.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:22.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:22.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:22.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:22.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:24.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:25.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:25.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:26.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:27.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:27.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:27.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:27.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:27.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:28.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:29.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:29.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:30.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:30.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:34.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:35.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:36.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:37.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:40.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:41.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:41.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:41.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:41.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:41.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:42.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:43.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:43.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:43.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.306Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.416Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:44.530Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:44.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:45.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:45.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:45.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:45.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:46.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.468Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.470Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:46.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:46.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:46.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.764Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.336Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CCT38P5GEHQWD48FB3JB7.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:56:47.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:47.636Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:47.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:48.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:49.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:50.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:50.187Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:50.198Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:50.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:50.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:50.610Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:50.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:50.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:51.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:51.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:51.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:52.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:52.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:52.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:55.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:55.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:55.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:55.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:56.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:57.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:57.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:57.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:57.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:56:57.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:57.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:58.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:56:59.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:00.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:00.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:00.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:00.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:02.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:04.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:05.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:06.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:07.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:07.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:07.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:07.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:07.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:07.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:07.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:08.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:09.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:11.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:11.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:11.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:11.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:11.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:11.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:12.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:13.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:13.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.329Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:14.425Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:14.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:15.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:15.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:15.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:15.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:16.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:16.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:16.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:17.148Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:17.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-ingress.rules repeated 7 more times between ts=2022-10-13T09:57:19.504Z and ts=2022-10-13T09:57:19.507Z ...]
level=warn ts=2022-10-13T09:57:19.658Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.809Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.818Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:20.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:20.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:20.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:20.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:20.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:20.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:21.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:21.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:21.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:21.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-kubernetes.rules repeated 41 more times between ts=2022-10-13T09:57:22.566Z and ts=2022-10-13T09:57:22.621Z ...]
level=error ts=2022-10-13T09:57:22.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:22.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:22.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:24.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:25.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:25.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:25.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:26.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:27.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:27.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:27.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-monitoring.rules repeated 5 more times between ts=2022-10-13T09:57:27.615Z and ts=2022-10-13T09:57:27.619Z ...]
level=warn ts=2022-10-13T09:57:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=k8s.rules repeated 10 more times between ts=2022-10-13T09:57:27.657Z and ts=2022-10-13T09:57:27.774Z ...]
level=error ts=2022-10-13T09:57:27.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:27.775Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:28.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:29.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:29.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:30.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:30.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:30.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:30.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=openshift-etcd-telemetry.rules repeated 4 more times between ts=2022-10-13T09:57:31.474Z and ts=2022-10-13T09:57:31.475Z ...]
level=error ts=2022-10-13T09:57:32.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical warning for group=node-exporter.rules repeated 10 more times between ts=2022-10-13T09:57:32.545Z and ts=2022-10-13T09:57:32.548Z ...]
level=error ts=2022-10-13T09:57:32.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:34.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:35.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:36.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:38.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:40.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:41.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:41.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:41.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:41.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:41.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:42.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:42.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:42.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:43.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:43.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.273Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:44.392Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:44.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:45.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:45.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:46.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:46.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:46.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:47.188Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.337Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CEMP9APS18K5PG7J9PHVG.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:57:47.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:47.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:48.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:49.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.588Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.751Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.760Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:50.150Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:50.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:50.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:50.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:50.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:51.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:51.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:51.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:51.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:51.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:52.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:52.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:52.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:54.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:55.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:55.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:55.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:55.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:56.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:57.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:57.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:57.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:57.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:57.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:57:57.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:58.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:57:59.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:00.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:00.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:02.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:03.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:04.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:05.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:06.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:07.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:08.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:09.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:11.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:11.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:11.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:11.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:11.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:12.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:13.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.225Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.347Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:14.455Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:14.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:15.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:15.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:15.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:15.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:16.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:16.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:16.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:16.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:17.230Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:17.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:18.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 7 more times for group=openshift-ingress.rules between ts=2022-10-13T09:58:19.503Z and ts=2022-10-13T09:58:19.507Z]
level=warn ts=2022-10-13T09:58:19.672Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.839Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.851Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:20.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:20.225Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:20.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:20.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:20.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:20.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:20.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:21.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:21.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:21.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:21.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:58:22.565Z and ts=2022-10-13T09:58:22.608Z]
level=error ts=2022-10-13T09:58:22.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:22.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:22.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:23.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:24.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:25.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:25.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:25.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:26.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:27.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:27.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T09:58:27.615Z and ts=2022-10-13T09:58:27.618Z]
level=warn ts=2022-10-13T09:58:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 11 more times for group=k8s.rules between ts=2022-10-13T09:58:27.656Z and ts=2022-10-13T09:58:27.719Z]
level=error ts=2022-10-13T09:58:27.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:28.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:29.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:29.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:30.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:30.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:30.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:30.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:30.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 4 more times for group=openshift-etcd-telemetry.rules between ts=2022-10-13T09:58:31.474Z and ts=2022-10-13T09:58:31.476Z]
level=error ts=2022-10-13T09:58:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T09:58:32.544Z and ts=2022-10-13T09:58:32.548Z]
level=error ts=2022-10-13T09:58:32.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:33.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:34.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:35.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:36.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:38.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:40.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:41.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:41.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:41.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:41.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:41.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:41.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:42.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:43.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:43.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.362Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:44.500Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:44.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:45.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:45.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:45.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:45.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:46.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:46.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:46.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:46.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:47.195Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.338Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CGF9A04DEF3G6PHDD6RBH.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:58:47.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:47.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:49.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.532Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.679Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.687Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:50.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:50.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:50.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:50.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:50.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:50.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:51.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:51.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:51.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:52.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:52.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:52.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:53.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:54.514Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:54.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:55.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:55.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:55.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:56.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:57.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:57.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:57.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.762Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:58:57.763Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:58.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:58:59.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:00.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:00.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:04.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:05.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:06.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:06.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:07.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:08.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:11.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:11.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:11.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:11.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:11.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:13.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:13.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:13.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.298Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:14.411Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:14.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:15.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:15.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:15.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:15.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:15.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:16.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:16.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:16.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:17.176Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:17.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:18.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.569Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.737Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.749Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:20.158Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:20.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:20.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:20.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:20.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:20.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:21.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:21.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:21.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:22.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:22.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:22.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:24.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:25.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:25.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:25.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:26.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:27.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:27.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:28.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:29.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:30.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:30.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:30.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:34.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:35.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:36.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:37.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:38.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.730Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:41.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:41.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:41.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:41.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:41.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:43.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:43.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.330Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:44.460Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:44.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:45.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:45.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:45.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:45.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:46.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:46.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:47.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.339Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CJ9WBGGD7HZJ58Y118S9P.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:59:47.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:47.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:48.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:48.265Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:49.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.885Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:50.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:50.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:50.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:50.463Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:50.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:50.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:50.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:50.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:51.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:51.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:51.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:51.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:52.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:52.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:52.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:54.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:55.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:55.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:55.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:56.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:57.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:57.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:57.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:57.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:59:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:58.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:59:59.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:00.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:00.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:00.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:00.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:04.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:06.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:07.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:07.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:07.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:07.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:07.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:08.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:10.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:10.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:11.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:11.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:11.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:11.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:13.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.194Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.287Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:14.377Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:14.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:15.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:15.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:15.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:15.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:16.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:16.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:16.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:17.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:17.242Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:17.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:17.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:18.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.655Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.819Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.828Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:20.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:20.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:20.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:20.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:20.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:20.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:21.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:21.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:21.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:22.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:22.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:22.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:22.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:24.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:25.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:25.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:25.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:27.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:27.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:27.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:27.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:27.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:28.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:29.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:29.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:30.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:30.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:30.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:32.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:34.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:35.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:36.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:37.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:38.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:39.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:40.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:41.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:41.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:41.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:41.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:41.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:42.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:43.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:43.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.106Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:44.138Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:44.253Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:44.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:44.464Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:44.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:45.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:46.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:46.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:46.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:46.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:46.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:47.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.339Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CM4FB9W6MP5MH3WXYVT44.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:00:47.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:47.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:48.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:49.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.474Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.633Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.647Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:50.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:50.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:50.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:50.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:50.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:50.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:50.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:51.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:51.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:51.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:51.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:52.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:52.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:52.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:53.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:54.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:55.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:55.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:55.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:55.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:56.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:57.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:57.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:57.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:00:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:58.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:00:59.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:00.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:00.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:00.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:00.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:02.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:04.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:05.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:07.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:08.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:11.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:11.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:11.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:11.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:11.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:12.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:13.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:13.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.324Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:14.420Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:15.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:15.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:15.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:16.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:16.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:16.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:17.187Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:17.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:18.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:18.267Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:19.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.842Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.851Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:20.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:20.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:20.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:20.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:20.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:21.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:22.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:22.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:22.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:22.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:24.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:25.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:25.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:25.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:26.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:27.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:28.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:29.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:29.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:30.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:30.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:30.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:30.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:31.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:34.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:36.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:37.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:38.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:40.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:40.985Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:41.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:41.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:41.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:41.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:41.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.979Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.094Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.184Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.283Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:44.392Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:44.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:45.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:46.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:46.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.463Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:46.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:47.233Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.341Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CNZ2CVB5PFKEPHZ7H0GER.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:01:47.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:47.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:48.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:49.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.674Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.827Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.836Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:50.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:50.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:50.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:50.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:50.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:50.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:51.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:51.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:51.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:52.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:52.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:52.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:53.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:54.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:55.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:55.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:55.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:56.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:57.085Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:57.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:01:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:58.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:01:59.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:00.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:00.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:00.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:01.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:02.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:04.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:05.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:06.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:07.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:08.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.730Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:10.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:11.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:11.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:11.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:11.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:11.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:13.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:13.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.204Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.297Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:14.393Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:14.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:15.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:15.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:15.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:16.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:16.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:16.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:17.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:17.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.594Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.772Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.780Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:20.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:20.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:20.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:20.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:20.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:20.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:21.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:22.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:22.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:22.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:22.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:24.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:25.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:25.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:27.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:27.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:27.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.749Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.750Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:27.750Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:27.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:28.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:29.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:29.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:30.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:30.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:30.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:30.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:34.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:35.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:36.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:37.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:38.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:40.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:40.986Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:41.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:41.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:41.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:41.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:43.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.252Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.369Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:44.475Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:44.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:45.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:45.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:46.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:46.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:46.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:46.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:46.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:47.210Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.341Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CQSNDTH79DQESKPBQABE7.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:02:47.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:47.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:48.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:49.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.542Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.704Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.715Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:50.177Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:50.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:50.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:50.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:50.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:52.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:52.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:52.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:53.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:54.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:55.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:55.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:55.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:55.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:56.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:57.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:57.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:57.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:02:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:58.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:59.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:02:59.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:00.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:00.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:00.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:01.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:02.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:04.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:05.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:07.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:08.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:10.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 5 more times between ts=2022-10-13T10:03:10.981Z and ts=2022-10-13T10:03:10.982Z]
level=error ts=2022-10-13T10:03:11.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:11.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:11.740Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:11.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:12.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:13.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.140Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.333Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:14.430Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:14.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:15.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:15.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:15.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:15.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:16.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:16.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:16.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:16.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:17.200Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:17.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:18.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:18.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 7 more times between ts=2022-10-13T10:03:19.503Z and ts=2022-10-13T10:03:19.507Z]
level=warn ts=2022-10-13T10:03:19.557Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:19.707Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:19.716Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:20.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:20.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:20.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:20.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:20.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:20.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:20.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:21.085Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:21.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:21.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:21.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[last message repeated 41 more times between ts=2022-10-13T10:03:22.566Z and ts=2022-10-13T10:03:22.613Z]
level=error ts=2022-10-13T10:03:22.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:22.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:23.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:24.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:25.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:25.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:25.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:26.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:27.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:27.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:27.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:27.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:27.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:27.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:28.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:29.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:30.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:31.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:34.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:35.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:36.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:37.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:38.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:40.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:41.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:41.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:41.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:41.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:41.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:41.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:42.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:42.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:43.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:43.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:43.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:44.458Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:44.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:45.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:45.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:46.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:46.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:46.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:47.168Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.342Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CSM8E3M58MWZYEJXRTKYP.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:03:47.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:47.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:49.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.514Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.661Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.670Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:50.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:50.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:50.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:50.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:50.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:50.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:50.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:51.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:51.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:51.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:52.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:52.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:52.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:54.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:55.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:55.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:55.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:55.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:56.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.744Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.745Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:03:57.745Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:57.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:58.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:59.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:03:59.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:00.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:00.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:00.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:00.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:01.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:04.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:05.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:06.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:07.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:07.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:07.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:07.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:07.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:08.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:10.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:11.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:11.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:11.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:11.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:13.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.199Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.292Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:14.389Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:14.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:15.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:15.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:15.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:15.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:16.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:16.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:16.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:17.166Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:17.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.553Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.712Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.724Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:20.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:20.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:20.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:20.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:21.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:21.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:22.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:22.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:22.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:23.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:24.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:25.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:27.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:27.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:27.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:27.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:29.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:30.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:30.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:30.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:31.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:34.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:36.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:37.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:38.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:40.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:41.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:41.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:41.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:41.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:41.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:43.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:43.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.194Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.354Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.516Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:44.648Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:44.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:45.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:45.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:46.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:46.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.463Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.464Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:47.216Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.343Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CVEVEMEW4EXA4M5GDM4VQ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:04:47.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:47.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:48.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:49.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.513Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.514Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.515Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.635Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.826Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.835Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:50.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:50.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:50.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:50.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:50.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:50.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:51.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:51.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:51.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:51.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:52.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:52.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:52.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:53.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:54.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:54.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:55.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:55.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:55.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:56.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:57.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:57.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:57.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:57.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.745Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.746Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:04:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:58.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:04:59.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:00.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:00.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:00.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:00.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:03.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:04.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:05.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.085Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:06.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:07.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:08.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:10.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:11.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:11.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:11.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:11.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.150Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.244Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.340Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:14.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:14.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:15.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:15.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:15.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:15.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:16.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:16.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:16.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:16.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:16.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:17.230Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:17.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:18.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.555Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.702Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.710Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:20.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:20.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:20.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:20.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:20.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:21.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:21.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:21.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:22.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:22.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:22.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:22.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:23.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:24.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:25.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:25.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:25.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:26.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:27.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:27.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:27.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:27.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:27.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:28.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:29.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:29.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:30.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:30.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:31.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:32.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:34.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:35.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:36.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:37.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:38.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:40.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:41.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:41.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:41.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:41.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:41.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:42.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:43.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.157Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.377Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:44.480Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:44.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:45.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:45.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:45.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:45.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:46.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:46.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:46.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:47.148Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.343Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CX9EFGB5X0HNMKBJRJ2E5.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:05:47.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:47.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:48.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.509Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.660Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.670Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:50.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:50.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:50.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:50.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:50.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:50.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:50.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:51.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:51.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:51.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:51.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:51.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:51.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:52.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:52.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:52.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:53.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:54.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:55.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:55.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:55.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:55.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:56.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:05:57.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:58.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:59.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:05:59.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:00.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:00.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:00.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:00.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:01.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:02.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:03.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:04.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:05.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:06.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:06.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:06.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:06.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:06.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:08.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:10.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:11.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:11.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:11.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:11.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:12.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:14.429Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:14.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:15.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:15.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:15.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:15.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:16.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:16.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:16.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:16.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:17.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:17.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:18.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.702Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.888Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.898Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:20.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:20.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:20.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:20.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:20.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:20.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:21.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:21.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:21.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:21.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:22.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:22.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:23.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:23.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:24.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:25.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:26.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:27.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:27.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:27.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:27.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:27.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:28.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:29.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:29.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:30.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:30.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:30.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:30.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:31.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:32.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:35.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:38.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:40.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:41.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:41.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:41.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:43.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:43.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.310Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:44.410Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:44.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:45.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:45.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:45.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:45.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:46.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:46.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.545Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.585Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.588Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:46.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:46.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.346Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8CZ41JXC6BK1QCPGA2BVZE.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:06:47.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:47.684Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:47.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:50.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:50.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:50.246Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:50.255Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:50.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:50.650Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:50.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:50.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:50.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:51.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:51.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:51.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:52.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:52.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:52.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:53.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:53.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:54.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:54.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:55.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:55.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:55.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:56.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:57.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:57.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:57.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.741Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:06:57.742Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:58.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:06:59.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:00.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:00.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:00.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:00.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:01.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:04.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:05.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:07.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:08.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:11.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:11.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:11.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:11.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:11.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:11.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:11.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:12.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:12.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:12.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:13.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:13.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.355Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:14.460Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:14.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:15.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:15.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:15.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:15.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:16.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:16.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:16.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:17.247Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:17.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:18.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:18.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.585Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.736Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.746Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:20.157Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:20.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:20.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:20.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:20.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:20.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:20.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:21.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:21.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:21.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:22.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:22.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:22.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:24.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:25.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:25.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:26.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:27.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:28.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:29.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:29.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:30.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:30.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:30.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:30.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:33.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:34.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:35.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:37.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:38.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:39.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:40.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:41.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:41.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:41.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:41.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:42.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:43.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:44.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:44.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.093Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:45.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:46.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:46.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:47.128Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.347Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D0YMJC02YCWEAQ478X2WB.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:07:47.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:47.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:48.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:49.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.447Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.595Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.605Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:50.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:50.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:50.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:50.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:50.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:50.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:51.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:51.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:51.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:51.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:52.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:52.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:52.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:53.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:54.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:55.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:55.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:55.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:55.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:56.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:57.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:57.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:57.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:07:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:58.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:07:59.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:00.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:00.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:00.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:02.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:04.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:05.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:07.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:08.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:09.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:10.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:11.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:11.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:11.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:11.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:11.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:13.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.142Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.242Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.329Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:14.412Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:14.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:15.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:15.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:15.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:15.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:15.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:16.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:16.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:16.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:16.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:17.219Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:17.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:18.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:18.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.676Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.844Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.857Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:20.315Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:20.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:20.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:20.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:20.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:21.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:21.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:21.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:21.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:21.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:21.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:21.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:22.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:22.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:22.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:24.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:25.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:25.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:27.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:27.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:27.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:27.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:27.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:27.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:27.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:28.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:29.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:30.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:30.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:30.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:30.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:32.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:33.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:34.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:35.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:37.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:38.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:40.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:41.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:41.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:41.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:41.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:41.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:41.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:42.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:42.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:43.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:43.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.104Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.129Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.344Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:44.443Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:44.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:45.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:45.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:45.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:46.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:46.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:46.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:46.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:46.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:47.164Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.348Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D2S7KRA9B2K842JMNSN4K.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:08:47.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:47.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:48.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:49.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.490Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.640Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.649Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:50.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:50.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:50.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:50.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:50.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:50.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:50.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:51.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:51.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:51.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:51.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:51.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:52.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:52.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:52.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:53.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:54.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:55.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:55.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:55.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:55.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:56.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:57.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:57.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:57.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:57.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:57.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:08:57.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:58.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:08:59.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:00.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:00.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:00.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:00.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:00.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:01.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:05.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:06.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:07.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:07.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:07.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:07.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:07.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:07.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:08.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:10.985Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:11.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:11.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:11.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:11.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:12.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:13.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:13.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.096Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.159Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.216Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.458Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:14.556Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:14.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:15.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:15.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:15.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:15.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:15.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:16.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:16.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:16.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:16.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:17.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:17.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:18.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.610Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.762Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.772Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:20.169Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:20.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:20.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:20.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:20.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:21.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:21.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:22.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:22.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:22.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:23.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:24.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:25.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:25.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:26.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.741Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.742Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:27.742Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:28.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:29.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:30.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:30.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:30.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:30.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 10 more times through ts=2022-10-13T10:09:32.548Z]
level=error ts=2022-10-13T10:09:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:34.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:35.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:36.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:37.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:38.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:40.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:41.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:41.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:41.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:41.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:41.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:41.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:42.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:42.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:43.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 6 more times through ts=2022-10-13T10:09:44.073Z]
level=error ts=2022-10-13T10:09:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.284Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.395Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:44.505Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:44.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:45.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:45.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:46.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:46.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:46.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:47.297Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.358Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D4KTX9B1B1C1QGT2KS7R4.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:09:47.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:47.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:49.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 7 more times through ts=2022-10-13T10:09:49.506Z]
level=warn ts=2022-10-13T10:09:49.836Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:49.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:50.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:50.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:50.403Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:50.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:50.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:50.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:51.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:51.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:51.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated 41 more times through ts=2022-10-13T10:09:52.623Z]
level=error ts=2022-10-13T10:09:52.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:52.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:52.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:54.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:55.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:55.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:55.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:56.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:57.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:57.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:57.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:09:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:57.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:58.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:09:59.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:00.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:00.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:04.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:05.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:06.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:08.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:10.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:11.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:11.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:11.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:11.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:11.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:11.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:12.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:13.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:13.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.154Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.399Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:14.502Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:14.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:15.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:15.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:15.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:16.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:16.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:16.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:17.164Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:17.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.616Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.774Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.783Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:20.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:20.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:20.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:20.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:20.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:20.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:21.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:21.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:22.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:22.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:22.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:24.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:25.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:25.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:25.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:25.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:26.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:27.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:27.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:27.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:27.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:27.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:27.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:27.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:28.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:29.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:29.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:29.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:30.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:30.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:30.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:30.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:31.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:32.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:33.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:34.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:35.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:37.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:38.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.728Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:40.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:41.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:41.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:41.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:41.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:43.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:43.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:43.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated a further 7 times through ts=2022-10-13T10:10:44.087Z]
level=error ts=2022-10-13T10:10:44.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:44.429Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:44.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:45.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:45.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:45.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:46.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:46.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:46.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:46.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:46.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:47.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.359Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D6EDYYTZD49TC937D7E1C.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:10:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:47.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:48.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:49.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated a further 7 times through ts=2022-10-13T10:10:49.506Z]
level=warn ts=2022-10-13T10:10:49.815Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:49.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:49.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:50.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:50.386Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:50.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:50.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:50.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:50.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:51.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:51.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:51.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:51.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous message repeated a further 41 times through ts=2022-10-13T10:10:52.616Z]
level=error ts=2022-10-13T10:10:52.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:52.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:52.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:54.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:55.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:55.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:56.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:57.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:57.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:57.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.745Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.745Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:10:57.746Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:58.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:10:59.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:00.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:00.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:00.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:01.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:02.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:04.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:05.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:06.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:08.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:09.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:11.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:11.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:11.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:11.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:12.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:13.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.280Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.396Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:14.513Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:14.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:15.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:15.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:15.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:15.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:16.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:16.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:16.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:16.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:16.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:17.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:17.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:18.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:18.268Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.572Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.729Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.739Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:20.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:20.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:20.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:20.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:20.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:20.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:21.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:21.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:22.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:22.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:22.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:23.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:24.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:25.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:25.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:25.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:26.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:28.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:29.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:30.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:30.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:30.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:34.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:35.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:36.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:37.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:38.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:40.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:41.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:41.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:41.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:41.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:41.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:42.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:43.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.211Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.309Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:44.434Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:44.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:45.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:45.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:45.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:45.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:46.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:46.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:47.137Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.360Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8D8910CPMK155537EQH0VX.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:11:47.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:47.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:48.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:48.263Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:49.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.495Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.650Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.659Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:50.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:50.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:50.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:50.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:50.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:50.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:51.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:51.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:51.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:52.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:52.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:52.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:54.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:55.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:55.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:55.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:55.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:56.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:57.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:57.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:57.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:57.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:11:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:57.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:59.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:11:59.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:00.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:00.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:00.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:00.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:01.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:02.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:04.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:05.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:06.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:07.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:08.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:11.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:11.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:11.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:11.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:11.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:11.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:12.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:13.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.323Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:14.421Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:14.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:15.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:15.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:15.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:15.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:16.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:16.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:16.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:17.133Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:17.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.502Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.661Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.671Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:20.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:20.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:20.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:20.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:20.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:20.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:21.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:21.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:21.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:21.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:22.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:22.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:22.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:23.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:24.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:25.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:25.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:27.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:27.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:27.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:27.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:27.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:27.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:27.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:28.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:29.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:30.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:30.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:34.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:35.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:36.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:38.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:40.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:41.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:41.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:41.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:41.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:41.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:41.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:42.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:42.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:43.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:43.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.219Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:44.408Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:44.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:45.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:45.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:45.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:46.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:46.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:46.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:47.255Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.361Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DA3M1FX78G4KDHGZH9VA8.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:12:47.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:47.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:48.259Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:49.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.547Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.713Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.724Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:50.131Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:50.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:50.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:50.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:50.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:50.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:51.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:51.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:51.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:52.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:52.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:52.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:53.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:53.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:54.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:55.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:55.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:55.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:56.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:12:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:58.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:59.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:12:59.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:00.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:00.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:00.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:00.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:01.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:02.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:04.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:05.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:07.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:08.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:09.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:10.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:11.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:11.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:11.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:11.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:11.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:11.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:12.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:13.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.332Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:14.423Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:14.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:15.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:15.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:15.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:16.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:16.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:16.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:16.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:16.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:17.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:17.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:18.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 6 further identical "Rule sample appending failed" warnings for group=openshift-ingress.rules between ts=2022-10-13T10:13:19.505Z and ts=2022-10-13T10:13:19.508Z, all reporting the same WAL "no space left on device" error ...]
level=warn ts=2022-10-13T10:13:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.650Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:19.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.798Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.806Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:20.192Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:20.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:20.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:20.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:20.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:20.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:21.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:21.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:21.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:22.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 40 further identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules between ts=2022-10-13T10:13:22.566Z and ts=2022-10-13T10:13:22.615Z, all reporting the same WAL "no space left on device" error ...]
level=warn ts=2022-10-13T10:13:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:22.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:22.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:24.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:25.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:25.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:26.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 10 further identical "Rule sample appending failed" warnings for group=k8s.rules between ts=2022-10-13T10:13:27.659Z and ts=2022-10-13T10:13:27.727Z, all reporting the same WAL "no space left on device" error ...]
level=warn ts=2022-10-13T10:13:27.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:27.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:28.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:29.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:29.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:30.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:30.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:30.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... 9 further identical "Rule sample appending failed" warnings for group=node-exporter.rules between ts=2022-10-13T10:13:32.545Z and ts=2022-10-13T10:13:32.548Z, all reporting the same WAL "no space left on device" error ...]
level=warn ts=2022-10-13T10:13:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:32.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:34.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:35.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:36.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:37.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:38.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:40.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:41.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:41.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:41.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:41.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:42.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:43.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.126Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.220Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:44.427Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:44.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:45.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:45.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:45.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:46.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:46.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:46.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:46.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:47.213Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.362Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DBY72KHFCYVB0M0N4ZRZY.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:13:47.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:47.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:48.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:49.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.513Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.665Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.673Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:50.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:50.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:50.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:50.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:50.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:50.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:51.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:51.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:51.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:51.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:52.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:52.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:52.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:53.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:54.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:55.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:55.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:55.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:56.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:57.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:57.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:57.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:57.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:57.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:57.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.754Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.755Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:13:57.756Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:58.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:59.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:13:59.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:00.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:00.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:00.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:00.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:00.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:01.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:04.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:05.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:06.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:07.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:07.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:07.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:07.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:07.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:08.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.364Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:10.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:11.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:11.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:11.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:11.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:12.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:13.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:13.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.295Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:14.389Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:14.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:15.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:15.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:15.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:16.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:16.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:16.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:16.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:17.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:17.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:18.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.749Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.906Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.916Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:20.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:20.338Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:20.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:20.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:20.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:20.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:21.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:21.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:21.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:21.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:22.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:22.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:22.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:24.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:25.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:25.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:26.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:27.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:27.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.112Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:27.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:27.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:27.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:27.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:27.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:28.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:29.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:29.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:30.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:30.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:30.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:31.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:32.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:33.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:35.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:36.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:37.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:38.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.417Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:40.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:41.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:41.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:41.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:41.741Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:41.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:43.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:43.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the kube-apiserver.rules warning above repeated 7 more times between 10:14:44.032 and 10:14:44.076, all with the same WAL write error]
level=error ts=2022-10-13T10:14:44.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.144Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.246Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.385Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:44.498Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:44.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:45.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:45.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:45.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:45.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:45.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:46.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:46.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:46.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:46.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:47.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.363Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DDRT3GSC5KEEFFYWXM04H.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:14:47.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:47.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:48.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:49.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the openshift-ingress.rules warning above repeated 7 more times between 10:14:49.503 and 10:14:49.507, all with the same WAL write error]
level=warn ts=2022-10-13T10:14:49.673Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.833Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.843Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:50.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:50.236Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:50.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:50.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:50.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:50.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:51.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the openshift-kubernetes.rules warning above repeated 41 more times between 10:14:52.567 and 10:14:52.628, all with the same WAL write error]
level=error ts=2022-10-13T10:14:52.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:52.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:52.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:53.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:54.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:55.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:55.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:55.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:55.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:55.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:56.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:57.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:57.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:57.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:57.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:14:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the k8s.rules warning above repeated 11 more times between 10:14:57.658 and 10:14:57.754, all with the same WAL write error]
level=error ts=2022-10-13T10:14:57.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:58.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:59.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:14:59.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:00.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:00.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:00.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:00.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:00.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:01.477Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:01.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:02.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:03.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:04.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:05.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:06.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:06.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:06.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:06.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:06.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:07.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:08.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:10.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:11.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:11.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:11.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:11.746Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:11.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:13.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:13.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:13.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.081Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.148Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.339Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.455Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:14.563Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.732Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:14.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:15.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:15.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:15.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.128Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:16.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:16.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:16.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:16.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:17.139Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:17.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:18.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.586Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.779Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.791Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:20.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:20.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:20.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:20.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:20.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:20.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:20.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:21.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:21.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:21.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:21.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the line above repeated 42 times between ts=2022-10-13T10:15:22.565Z and ts=2022-10-13T10:15:22.624Z; only the timestamps differ]
level=error ts=2022-10-13T10:15:22.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:22.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:22.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:23.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:24.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:25.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:25.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:25.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:26.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:27.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:27.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:27.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:27.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:27.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the line above repeated 12 times between ts=2022-10-13T10:15:27.655Z and ts=2022-10-13T10:15:27.733Z; only the timestamps differ]
level=error ts=2022-10-13T10:15:27.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:28.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:29.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:29.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:29.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:30.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:30.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:30.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:30.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:30.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:31.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[the line above repeated 11 times between ts=2022-10-13T10:15:32.544Z and ts=2022-10-13T10:15:32.547Z; only the timestamps differ]
level=error ts=2022-10-13T10:15:32.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:34.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:35.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:36.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:37.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:38.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.730Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:40.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:41.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:41.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:41.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:41.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:41.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:43.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:43.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.152Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.273Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.402Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:44.536Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:44.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:46.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:46.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:46.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:46.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:47.178Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.364Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DFKD4PAX12NYN1QTP9W97.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:15:47.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:47.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:48.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:49.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.489Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.643Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.656Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:50.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:50.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:50.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:50.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:50.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:51.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.634Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.635Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:52.636Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:52.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:52.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:52.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:53.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:54.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:55.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:55.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:55.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:56.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:57.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:57.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:57.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:57.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:57.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:15:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:57.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.356Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:58.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:59.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:15:59.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:00.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:00.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:00.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:00.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:00.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning repeated 4 more times for group=openshift-etcd-telemetry.rules through ts=2022-10-13T10:16:01.475Z]
level=error ts=2022-10-13T10:16:02.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning repeated 10 more times for group=node-exporter.rules through ts=2022-10-13T10:16:02.549Z]
level=error ts=2022-10-13T10:16:02.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:04.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:05.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:06.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:07.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:07.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:07.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:07.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:07.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:08.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:10.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning repeated 5 more times for group=kube-scheduler.rules through ts=2022-10-13T10:16:10.982Z]
level=error ts=2022-10-13T10:16:11.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:11.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:11.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:11.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:11.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:12.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:13.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:13.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:13.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning repeated 7 more times for group=kube-apiserver.rules through ts=2022-10-13T10:16:14.082Z]
level=error ts=2022-10-13T10:16:14.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:14.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:14.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:15.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:15.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:15.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:15.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:16.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:16.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:16.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:16.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:16.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:17.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:17.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:18.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning repeated 7 more times for group=openshift-ingress.rules through ts=2022-10-13T10:16:19.508Z]
level=warn ts=2022-10-13T10:16:19.547Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:19.697Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:19.711Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:20.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:20.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:20.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:20.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:20.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:20.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:20.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:21.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:21.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:21.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:21.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:21.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:21.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[warning repeated 30 more times for group=openshift-kubernetes.rules through ts=2022-10-13T10:16:22.587Z]
level=warn ts=2022-10-13T10:16:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.636Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.637Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:22.637Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:22.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:22.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:22.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:23.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:24.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:25.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:25.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:25.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:26.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:27.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:27.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:27.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:27.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:27.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:27.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:28.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:29.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:29.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:30.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:30.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:30.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:30.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:33.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:35.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:36.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:37.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:38.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:39.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:40.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:41.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:41.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:41.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:41.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:41.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.527Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:42.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:43.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.147Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.154Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.161Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.279Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.461Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.600Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:44.715Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:44.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:45.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:46.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:46.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:46.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:46.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:46.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:47.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.365Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DHE04RW1BED9QWMA7GF8S.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:16:47.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:47.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:48.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.712Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.856Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.866Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:50.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:50.256Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:50.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:50.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:50.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:50.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:50.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:51.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:51.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:51.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:51.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:51.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:52.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:52.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:52.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:53.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:54.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:54.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:55.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:55.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:55.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:55.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:55.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:56.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:57.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.047Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:57.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:57.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:57.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:57.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:57.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.773Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.774Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:16:57.774Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:57.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:58.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:59.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:16:59.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:00.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:00.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:00.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:00.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:00.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:00.585Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:01.743Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical node-exporter.rules warning repeated 10 more times, ts=10:17:02.545Z-10:17:02.549Z]
level=error ts=2022-10-13T10:17:02.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:02.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:03.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:04.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:05.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:06.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:06.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:06.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:06.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:08.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:09.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.191Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:10.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical kube-scheduler.rules warning repeated 5 more times, ts=10:17:10.981Z-10:17:10.982Z]
level=warn ts=2022-10-13T10:17:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:11.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:11.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:11.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:11.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:11.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:12.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:12.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:12.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:13.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:13.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical kube-apiserver.rules warning repeated 6 more times, ts=10:17:14.022Z-10:17:14.087Z]
level=error ts=2022-10-13T10:17:14.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.347Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:14.450Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:14.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:15.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:15.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:15.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:15.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:15.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:16.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:16.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:16.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:16.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:17.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:17.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:19.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical openshift-ingress.rules warning repeated 7 more times, ts=10:17:19.504Z-10:17:19.507Z]
level=error ts=2022-10-13T10:17:19.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.720Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.877Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.887Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:20.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:20.273Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:20.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:20.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:20.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:21.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical openshift-kubernetes.rules warning repeated 41 more times, ts=10:17:22.566Z-10:17:22.617Z]
level=error ts=2022-10-13T10:17:22.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:22.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:22.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:23.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:24.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:25.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:25.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:25.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:25.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:26.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:27.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:27.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:27.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:27.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:27.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:27.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=openshift-monitoring.rules repeated 5 more times between ts=2022-10-13T10:17:27.615Z and ts=2022-10-13T10:17:27.617Z]
level=warn ts=2022-10-13T10:17:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=k8s.rules repeated 11 more times between ts=2022-10-13T10:17:27.659Z and ts=2022-10-13T10:17:27.751Z]
level=error ts=2022-10-13T10:17:27.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:28.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:29.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:29.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:29.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:30.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:30.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:30.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:30.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:31.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=openshift-etcd-telemetry.rules repeated 4 more times between ts=2022-10-13T10:17:31.474Z and ts=2022-10-13T10:17:31.475Z]
level=error ts=2022-10-13T10:17:31.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=node-exporter.rules repeated 10 more times between ts=2022-10-13T10:17:32.545Z and ts=2022-10-13T10:17:32.548Z]
level=error ts=2022-10-13T10:17:32.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:32.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:33.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:34.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:35.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:36.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:37.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:40.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=kube-scheduler.rules repeated 4 more times between ts=2022-10-13T10:17:40.982Z and ts=2022-10-13T10:17:40.983Z]
level=error ts=2022-10-13T10:17:41.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:41.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:41.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:41.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:42.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:43.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:43.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=kube-apiserver.rules repeated 4 more times between ts=2022-10-13T10:17:43.979Z and ts=2022-10-13T10:17:44.034Z]
level=error ts=2022-10-13T10:17:44.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" warning for group=kube-apiserver.rules repeated 4 more times between ts=2022-10-13T10:17:44.057Z and ts=2022-10-13T10:17:44.079Z]
level=error ts=2022-10-13T10:17:44.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.189Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.330Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.454Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:44.559Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:44.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:45.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:45.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:45.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:45.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:45.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:45.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:46.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:46.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:46.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:46.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:47.271Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.365Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DK8K5HNF67JNGTGFQSTX0.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:17:47.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:47.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:48.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:49.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.843Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:50.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:50.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:50.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:50.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:50.460Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:50.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:50.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:50.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:50.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:51.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:51.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:51.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:51.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:51.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:52.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:52.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:53.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:53.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:53.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:54.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:54.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:55.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:55.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:55.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:55.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:57.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:57.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:57.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.116Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.120Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:57.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:57.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:57.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.746Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:17:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:57.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.362Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:58.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:59.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:17:59.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:00.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:00.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:00.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:00.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.415Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:01.525Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:04.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:05.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:06.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:08.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:10.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:11.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:11.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:11.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:11.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:13.955Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.097Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.174Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.449Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:14.572Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:14.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:15.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:15.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:15.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:15.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:16.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:16.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:16.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:16.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:16.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:17.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:17.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:17.332Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:17.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:17.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:18.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:18.265Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:19.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.821Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:19.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:20.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:20.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:20.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:20.522Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:20.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:20.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:20.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:20.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:21.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:21.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:21.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:21.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:21.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:21.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.638Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.638Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:22.639Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:22.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:22.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:23.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:24.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:25.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:25.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:25.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:25.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:26.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:27.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:27.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:27.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:27.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:27.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:27.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:27.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.752Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.752Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:27.753Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:27.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:28.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:29.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:29.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:29.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:30.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:30.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:30.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:30.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:30.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:31.476Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:31.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:32.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:34.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:35.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:36.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:37.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:38.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:39.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:40.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:41.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:41.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:41.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:41.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:41.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:41.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:42.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:43.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.161Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.272Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.378Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:44.484Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:45.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:45.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:45.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:45.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:45.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:45.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:46.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:46.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:46.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:46.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:47.194Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.367Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DN3663ANNDD8M3Z55AMTD.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:18:47.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:47.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.082Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:48.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:49.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.642Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.814Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.824Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:49.939Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:50.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:50.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:50.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:50.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:50.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:50.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:50.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:51.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:51.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:51.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:51.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:51.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:51.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:51.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.645Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.645Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:52.646Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:52.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:52.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:52.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:53.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:54.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:55.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:55.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:55.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:55.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:56.013Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:56.014Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:56.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:57.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:57.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:57.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.117Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:57.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:57.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:57.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:57.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:18:57.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:57.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:58.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:59.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:59.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:18:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:00.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:00.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:00.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:00.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:01.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:01.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:02.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:04.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:05.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.511Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:06.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:07.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:07.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:07.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:07.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:07.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:07.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:07.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:08.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:10.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:11.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:11.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:11.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:11.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:11.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:12.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:12.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:12.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:12.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:12.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:13.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:13.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.156Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.372Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:14.478Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:15.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:15.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:15.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:15.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:16.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:16.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:16.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:16.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:16.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:17.194Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:17.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:18.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.544Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:19.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.703Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.713Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:20.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:20.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:20.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:20.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:20.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:20.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:20.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:21.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:21.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:21.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:21.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:21.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:21.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:21.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.638Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.639Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:22.640Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:22.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:22.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:22.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:23.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:24.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:25.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:25.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:25.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:25.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:26.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.741Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:27.742Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:27.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:28.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:29.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:29.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:29.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:30.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:30.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:30.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:30.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:30.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.474Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:31.475Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:31.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:32.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:33.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:34.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:35.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:36.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:37.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:38.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:39.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:40.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:41.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:41.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:41.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:41.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:41.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:42.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:43.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:43.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.141Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.189Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.344Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.457Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:44.569Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:44.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:45.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:45.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:45.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:45.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:45.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:45.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:46.367Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:46.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:46.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:46.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:46.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:47.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.368Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8DPXS8N7VBZCXCGC0ZQ092.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T10:19:47.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:47.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:48.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:49.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning emitted 7 more times for group=openshift-ingress.rules between ts=2022-10-13T10:19:49.503Z and ts=2022-10-13T10:19:49.505Z]
level=warn ts=2022-10-13T10:19:49.656Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.857Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.872Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:50.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:50.332Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:50.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:50.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:50.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:50.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:50.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:51.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:51.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:51.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:51.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:51.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:51.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:51.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning emitted 38 more times for group=openshift-kubernetes.rules between ts=2022-10-13T10:19:52.566Z and ts=2022-10-13T10:19:52.594Z]
level=error ts=2022-10-13T10:19:52.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:52.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:52.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:53.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:53.238Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:53.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:54.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:55.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:55.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:55.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:55.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:55.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:57.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:57.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:57.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.118Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:57.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:57.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:57.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:57.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:19:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning emitted 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T10:19:57.616Z and ts=2022-10-13T10:19:57.620Z]
level=warn ts=2022-10-13T10:19:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning emitted 11 more times for group=k8s.rules between ts=2022-10-13T10:19:57.656Z and ts=2022-10-13T10:19:57.739Z]
level=error ts=2022-10-13T10:19:57.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:58.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:59.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:59.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:19:59.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:00.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:00.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:00.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:00.507Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:00.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:01.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:01.473Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning emitted 4 more times for group=openshift-etcd-telemetry.rules between ts=2022-10-13T10:20:01.473Z and ts=2022-10-13T10:20:01.475Z]
level=error ts=2022-10-13T10:20:02.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[identical warning emitted 10 more times for group=node-exporter.rules between ts=2022-10-13T10:20:02.544Z and ts=2022-10-13T10:20:02.549Z]
level=error ts=2022-10-13T10:20:02.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:02.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:04.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:05.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:06.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:07.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:07.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:07.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:07.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:07.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:07.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.069Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:08.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:09.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:10.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:11.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:11.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:11.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:11.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:11.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:11.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:12.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:12.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:12.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:12.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:12.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:13.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:13.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.095Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.333Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:14.437Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:14.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:15.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:15.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:15.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:15.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:15.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:16.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:16.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:16.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:16.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:16.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:16.992Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:17.213Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:17.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:18.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:18.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:19.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:19.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:19.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:19.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.726Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.896Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.907Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:20.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T10:20:20.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:20.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:20.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:20.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:20.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:20.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:21.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T10:20:21.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
<----end of log for "prometheus-k8s-1"/"prometheus"

Oct 13 10:20:21.491: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-1 -c config-reloader -n openshift-monitoring'
Oct 13 10:20:21.653: INFO: Log for pod "prometheus-k8s-1"/"config-reloader"
---->
level=info ts=2022-10-11T16:46:41.440325465Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=fc23b05)"
level=info ts=2022-10-11T16:46:41.440402435Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221006-18:49:18)"
level=info ts=2022-10-11T16:46:41.440562307Z caller=main.go:183 msg="Starting web server for metrics" listen=localhost:8080
level=info ts=2022-10-11T16:46:41.922315745Z caller=reloader.go:355 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
level=info ts=2022-10-11T16:46:41.92244225Z caller=reloader.go:219 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
<----end of log for "prometheus-k8s-1"/"config-reloader"

Oct 13 10:20:21.653: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-1 -c thanos-sidecar -n openshift-monitoring'
Oct 13 10:20:21.826: INFO: Log for pod "prometheus-k8s-1"/"thanos-sidecar"
---->
level=info ts=2022-10-11T16:46:48.417552874Z caller=sidecar.go:106 msg="no supported bucket was configured, uploads will be disabled"
level=info ts=2022-10-11T16:46:48.417710687Z caller=options.go:28 protocol=gRPC msg="enabling server side TLS"
level=info ts=2022-10-11T16:46:48.418305737Z caller=options.go:58 protocol=gRPC msg="server TLS client verification enabled"
level=info ts=2022-10-11T16:46:48.418898679Z caller=sidecar.go:326 msg="starting sidecar"
level=info ts=2022-10-11T16:46:48.419155574Z caller=reloader.go:183 component=reloader msg="nothing to be watched"
level=info ts=2022-10-11T16:46:48.419226301Z caller=intrumentation.go:48 msg="changing probe status" status=ready
level=info ts=2022-10-11T16:46:48.419493828Z caller=intrumentation.go:60 msg="changing probe status" status=healthy
level=info ts=2022-10-11T16:46:48.419572419Z caller=http.go:63 service=http/server component=sidecar msg="listening for requests and metrics" address=127.0.0.1:10902
level=info ts=2022-10-11T16:46:48.420863305Z caller=grpc.go:123 service=gRPC/server component=sidecar msg="listening for serving gRPC" address=[10.128.23.35]:10901
level=info ts=2022-10-11T16:46:48.420981099Z caller=tls_config.go:191 service=http/server component=sidecar msg="TLS is disabled." http2=false
level=info ts=2022-10-11T16:46:48.421892741Z caller=sidecar.go:166 msg="successfully loaded prometheus version"
level=info ts=2022-10-11T16:46:48.524456417Z caller=sidecar.go:188 msg="successfully loaded prometheus external labels" external_labels="{prometheus=\"openshift-monitoring/k8s\", prometheus_replica=\"prometheus-k8s-1\"}"
level=info ts=2022-10-11T16:46:48.524535652Z caller=intrumentation.go:48 msg="changing probe status" status=ready
<----end of log for "prometheus-k8s-1"/"thanos-sidecar"

Oct 13 10:20:21.826: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-1 -c prometheus-proxy -n openshift-monitoring'
Oct 13 10:20:22.007: INFO: Log for pod "prometheus-k8s-1"/"prometheus-proxy"
---->
2022/10/11 16:46:48 provider.go:128: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s
2022/10/11 16:46:48 provider.go:133: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2022/10/11 16:46:48 provider.go:351: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2022/10/11 16:46:48 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"
2022/10/11 16:46:48 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s
2022/10/11 16:46:48 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled
2022/10/11 16:46:48 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth
I1011 16:46:48.726464       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key
2022/10/11 16:46:48 http.go:107: HTTPS: listening on [::]:9091
<----end of log for "prometheus-k8s-1"/"prometheus-proxy"

Oct 13 10:20:22.008: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-1 -c kube-rbac-proxy -n openshift-monitoring'
Oct 13 10:20:22.226: INFO: Log for pod "prometheus-k8s-1"/"kube-rbac-proxy"
---->
I1011 16:46:48.855111       1 main.go:151] Reading config file: /etc/kube-rbac-proxy/config.yaml
I1011 16:46:48.857887       1 main.go:181] Valid token audiences: 
I1011 16:46:48.858006       1 main.go:305] Reading certificate files
I1011 16:46:48.858061       1 reloader.go:98] reloading key /etc/tls/private/tls.key certificate /etc/tls/private/tls.crt
I1011 16:46:48.858317       1 main.go:339] Starting TCP socket on 0.0.0.0:9092
I1011 16:46:48.858690       1 main.go:346] Listening securely on 0.0.0.0:9092
<----end of log for "prometheus-k8s-1"/"kube-rbac-proxy"

Oct 13 10:20:22.226: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-1 -c prom-label-proxy -n openshift-monitoring'
Oct 13 10:20:22.411: INFO: Log for pod "prometheus-k8s-1"/"prom-label-proxy"
---->
2022/10/11 16:46:56 Listening insecurely on 127.0.0.1:9095
<----end of log for "prometheus-k8s-1"/"prom-label-proxy"

Oct 13 10:20:22.411: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-1 -c kube-rbac-proxy-thanos -n openshift-monitoring'
Oct 13 10:20:22.609: INFO: Log for pod "prometheus-k8s-1"/"kube-rbac-proxy-thanos"
---->
I1011 16:46:56.917996       1 main.go:181] Valid token audiences: 
I1011 16:46:56.918574       1 main.go:305] Reading certificate files
I1011 16:46:56.918732       1 dynamic_cafile_content.go:167] Starting client-ca::/etc/tls/client/client-ca.crt
I1011 16:46:56.918883       1 main.go:339] Starting TCP socket on [10.128.23.35]:10902
I1011 16:46:56.919101       1 main.go:346] Listening securely on [10.128.23.35]:10902
<----end of log for "prometheus-k8s-1"/"kube-rbac-proxy-thanos"

fail [github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:83]: Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            s: "promQL query returned unexpected results:\nopenshift_build_total{phase=\"Complete\"} >= 0\n[]",
        },
    ]
    promQL query returned unexpected results:
    openshift_build_total{phase="Complete"} >= 0
    []
occurred

Stderr
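
The failure above reports that the instant query openshift_build_total{phase="Complete"} >= 0 returned an empty vector, which is consistent with the scrape and rule-evaluation errors earlier in this log: the Prometheus WAL volume is out of space, so new samples (including the build metrics this test expects) were not being ingested. The sketch below illustrates the kind of PromQL instant query the test performs via the Prometheus HTTP API; it is not the origin test code, and the base URL, bearer token, and environment variables are illustrative assumptions (in-cluster access would normally go through thanos-querier with a service-account token and a trusted CA).

// Minimal sketch, assuming PROM_URL points at a reachable Prometheus/Thanos
// query endpoint and PROM_TOKEN holds a bearer token authorized to query it.
// It runs the same instant query as the failed test and reports whether any
// series came back.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

func main() {
	base := os.Getenv("PROM_URL")   // placeholder, e.g. a thanos-querier route
	token := os.Getenv("PROM_TOKEN") // placeholder bearer token
	query := `openshift_build_total{phase="Complete"} >= 0`

	// Instant query endpoint of the Prometheus HTTP API.
	req, err := http.NewRequest("GET", base+"/api/v1/query?query="+url.QueryEscape(query), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode only the fields needed to count returned series.
	var body struct {
		Status string `json:"status"`
		Data   struct {
			Result []json.RawMessage `json:"result"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}

	if body.Status != "success" || len(body.Data.Result) == 0 {
		// This is the condition reported in the failure above: "[]", no series.
		fmt.Println("promQL query returned no series")
		os.Exit(1)
	}
	fmt.Printf("promQL query returned %d series\n", len(body.Data.Result))
}

With ingestion blocked by the full WAL volume, such a query would return no series even though builds completed, which matches the empty result recorded in the failure.
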
_sig-builds__Feature_Builds__build_without_output_image__building_from_templates_should_create_an_image_from_a_S2i_template_without_an_output_image_reference_defined__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 69.0s

_sig-auth__Feature_SecurityContextConstraints___TestAllowedSCCViaRBAC__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 12.2s

_sig-builds__Feature_Builds__build_have_source_revision_metadata__started_build_should_contain_source_revision_information__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 88.0s

_sig-builds__Feature_Builds__prune_builds_based_on_settings_in_the_buildconfig__should_prune_builds_after_a_buildConfig_change__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 21.6s

_sig-builds__Feature_Builds__volumes__build_volumes__should_mount_given_secrets_and_configmaps_into_the_build_pod_for_docker_strategy_builds__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 178.0s

_sig-imageregistry__Feature_Image__signature_TestImageRemoveSignature__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.6s

_sig-auth__Feature_OpenShiftAuthorization__RBAC_proxy_for_openshift_authz__RunLegacyLocalRoleBindingEndpoint_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.6s

_sig-devex__Feature_OpenShiftControllerManager__TestDockercfgTokenDeletedController__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 13.3s

_sig-devex__Feature_Templates__templateinstance_cross-namespace_test_should_create_and_delete_objects_across_namespaces__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 5.5s

_sig-builds__Feature_Builds__oc_new-app__should_fail_with_a_--name_longer_than_58_characters__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 18.1s

_sig-builds__Feature_Builds__build_without_output_image__building_from_templates_should_create_an_image_from_a_docker_template_without_an_output_image_reference_defined__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 104.0s

_sig-imageregistry__Feature_ImageTriggers__Image_change_build_triggers_TestSimpleImageChangeBuildTriggerFromImageStreamTagCustom__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.5s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_rolled_back_should_rollback_to_an_older_deployment__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 214.0s

_sig-network__Feature_Router__The_HAProxy_router_should_serve_the_correct_routes_when_running_with_the_haproxy_config_manager__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.1s

Skipped: skip [github.com/openshift/origin/test/extended/router/config_manager.go:56]: TODO: This test is flaking, fix it
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:16:03.931: INFO: configPath is now "/tmp/configfile2328888411"
Oct 13 10:16:03.931: INFO: The user is now "e2e-test-router-config-manager-5zrgf-user"
Oct 13 10:16:03.931: INFO: Creating project "e2e-test-router-config-manager-5zrgf"
Oct 13 10:16:04.111: INFO: Waiting on permissions in project "e2e-test-router-config-manager-5zrgf" ...
Oct 13 10:16:04.133: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:16:04.246: INFO: Waiting for service account "default" secrets (default-token-76sdg) to include dockercfg/token ...
Oct 13 10:16:04.344: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:16:04.452: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:16:04.559: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:16:04.570: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:16:04.581: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:16:05.142: INFO: Project "e2e-test-router-config-manager-5zrgf" has been fully provisioned.
[BeforeEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/router/config_manager.go:44
Oct 13 10:16:05.152: INFO: Running 'oc --namespace=e2e-test-router-config-manager-5zrgf --kubeconfig=.kube/config new-app -f /tmp/fixture-testdata-dir1463401029/test/extended/testdata/router/router-config-manager.yaml -p IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21f5f7b1a13cd469283f14345b3fa0a710ccf1e2437d738b21f7f4ecf5611384'
W1013 10:16:05.273878   76228 shim_kubectl.go:55] Using non-groupfied API resources is deprecated and will be removed in a future release, update apiVersion to "template.openshift.io/v1" for your resource
--> Deploying template "e2e-test-router-config-manager-5zrgf/" for "/tmp/fixture-testdata-dir1463401029/test/extended/testdata/router/router-config-manager.yaml" to project e2e-test-router-config-manager-5zrgf

     * With parameters:
        * IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21f5f7b1a13cd469283f14345b3fa0a710ccf1e2437d738b21f7f4ecf5611384

--> Creating resources ...
    pod "router-haproxy-cfgmgr" created
    rolebinding.authorization.openshift.io "system-router" created
    route.route.openshift.io "edge-blueprint" created
    route.route.openshift.io "reencrypt-blueprint" created
    route.route.openshift.io "passthrough-blueprint" created
    configmap "serving-cert" created
    pod "insecure-endpoint" created
    pod "secure-endpoint" created
    service "insecure-service" created
    service "secure-service" created
    route.route.openshift.io "insecure-route" created
    route.route.openshift.io "edge-allow-http-route" created
    route.route.openshift.io "reencrypt-route" created
    route.route.openshift.io "passthrough-route" created
--> Success
    Access your application via route 'edge.blueprint.hapcm.test' 
    Access your application via route 'reencrypt.blueprint.hapcm.test' 
    Access your application via route 'passthrough.blueprint.hapcm.test' 
    Access your application via route 'insecure.hapcm.test' 
    Access your application via route 'edge.allow.hapcm.test' 
    Access your application via route 'reencrypt.hapcm.test' 
    Access your application via route 'passthrough.hapcm.test' 
    Run 'oc status' to view your app.
[It] should serve the correct routes when running with the haproxy config manager [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/router/config_manager.go:55
[AfterEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/router/config_manager.go:32
[AfterEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:16:06.281: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-router-config-manager-5zrgf-user}, err: <nil>
Oct 13 10:16:06.309: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-router-config-manager-5zrgf}, err: <nil>
Oct 13 10:16:06.325: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~jrSl7rBcqjEHodqatDgjlZrPBVeaJNCo5HWixgY_PnM}, err: <nil>
[AfterEach] [sig-network][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-router-config-manager-5zrgf" for this suite.
skip [github.com/openshift/origin/test/extended/router/config_manager.go:56]: TODO: This test is flaking, fix it

Stderr
_sig-apps__Feature_OpenShiftControllerManager__TestTriggers_MultipleICTs__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 32.5s

_sig-network-edge__DNS_should_answer_A_and_AAAA_queries_for_a_dual-stack_service__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.4s

Skipped: skip [github.com/openshift/origin/test/extended/dns/dns.go:486]: skipping test on non dual-stack enabled platform
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network-edge] DNS
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename dns
W1013 10:16:01.294638   76026 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:16:01.295: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network-edge] DNS
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network-edge] DNS
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:16:01.547: INFO: configPath is now "/tmp/configfile1607664237"
Oct 13 10:16:01.547: INFO: The user is now "e2e-test-dns-dualstack-clrqk-user"
Oct 13 10:16:01.547: INFO: Creating project "e2e-test-dns-dualstack-clrqk"
Oct 13 10:16:01.865: INFO: Waiting on permissions in project "e2e-test-dns-dualstack-clrqk" ...
Oct 13 10:16:01.873: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:16:01.987: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:16:02.099: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:16:02.226: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:16:02.250: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:16:02.256: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:16:02.845: INFO: Project "e2e-test-dns-dualstack-clrqk" has been fully provisioned.
[It] should answer A and AAAA queries for a dual-stack service [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/dns/dns.go:465
[AfterEach] [sig-network-edge] DNS
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-dns-5139" for this suite.
[AfterEach] [sig-network-edge] DNS
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:16:02.924: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-dns-dualstack-clrqk-user}, err: <nil>
Oct 13 10:16:02.956: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-dns-dualstack-clrqk}, err: <nil>
Oct 13 10:16:02.977: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~ttOBXtlCmuwhhqVaCghFWltCmyMhFnm49WnxUzbQfN0}, err: <nil>
[AfterEach] [sig-network-edge] DNS
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-dns-dualstack-clrqk" for this suite.
skip [github.com/openshift/origin/test/extended/dns/dns.go:486]: skipping test on non dual-stack enabled platform

Stderr
_sig-cli__oc_adm_storage-admin__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 5.8s

_sig-arch__ocp_payload_should_be_based_on_existing_source_OLM_version_should_contain_the_source_commit_id__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.8s

_sig-builds__Feature_Builds__s2i_build_with_a_quota__Building_from_a_template_should_create_an_s2i_build_with_a_quota_and_run_it__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 169.0s

_sig-devex__Feature_Templates__templateinstance_impersonation_tests_should_pass_impersonation_creation_tests__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 9.7s

_sig-imageregistry__Feature_ImageLayers__Image_layer_subresource_should_identify_a_deleted_image_as_missing__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.6s

_sig-network__network_isolation_when_using_OpenshiftSDN_in_a_mode_that_isolates_namespaces_by_default_should_prevent_communication_between_pods_in_different_namespaces_on_the_same_node__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.5s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:15:50.322: INFO: configPath is now "/tmp/configfile1190027739"
Oct 13 10:15:50.322: INFO: The user is now "e2e-test-ns-global-sszjj-user"
Oct 13 10:15:50.322: INFO: Creating project "e2e-test-ns-global-sszjj"
Oct 13 10:15:50.552: INFO: Waiting on permissions in project "e2e-test-ns-global-sszjj" ...
Oct 13 10:15:50.558: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:15:50.688: INFO: Waiting for service account "default" secrets () to include dockercfg/token ...
Oct 13 10:15:50.811: INFO: Waiting for service account "default" secrets (default-token-bzksv) to include dockercfg/token ...
Oct 13 10:15:50.898: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:15:51.036: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:15:51.161: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:15:51.185: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:15:51.479: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:15:52.816: INFO: Project "e2e-test-ns-global-sszjj" has been fully provisioned.
[BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  github.com/openshift/origin/test/extended/networking/util.go:350
Oct 13 10:15:53.064: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used
Oct 13 10:15:53.064: INFO: This plugin does not isolate namespaces by default.
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:15:53.092: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-ns-global-sszjj-user}, err: <nil>
Oct 13 10:15:53.127: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-ns-global-sszjj}, err: <nil>
Oct 13 10:15:53.164: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~1OcXFmx1T-EQzhi_17o8n2wuwZQdIaEnbfkrbWdeD_A}, err: <nil>
[AfterEach] [sig-network] network isolation
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-ns-global-sszjj" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.

Stderr
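
Note: the skip above is keyed on the detected network plugin. The configured plugin can be read from the cluster Network config:

  oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
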
_sig-auth__Feature_SecurityContextConstraints___TestPodDefaultCapabilities__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 82.0s

_sig-auth__Feature_ProjectAPI___TestProjectIsNamespace_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.5s

_sig-network__endpoints__admission_when_using_openshift-sdn_blocks_manual_creation_of_Endpoints_pointing_to_the_cluster_or_service_network__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.6s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:398]: Not using openshift-sdn
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network][endpoints] admission
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network][endpoints] admission
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:15:45.138: INFO: configPath is now "/tmp/configfile3194244932"
Oct 13 10:15:45.138: INFO: The user is now "e2e-test-endpoint-admission-ptb9c-user"
Oct 13 10:15:45.138: INFO: Creating project "e2e-test-endpoint-admission-ptb9c"
Oct 13 10:15:45.407: INFO: Waiting on permissions in project "e2e-test-endpoint-admission-ptb9c" ...
Oct 13 10:15:45.417: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:15:45.546: INFO: Waiting for service account "default" secrets () to include dockercfg/token ...
Oct 13 10:15:45.639: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:15:45.755: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:15:45.869: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:15:45.882: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:15:45.989: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:15:46.809: INFO: Project "e2e-test-endpoint-admission-ptb9c" has been fully provisioned.
[BeforeEach] when using openshift-sdn
  github.com/openshift/origin/test/extended/networking/util.go:396
Oct 13 10:15:46.961: INFO: Not using openshift-sdn
[AfterEach] [sig-network][endpoints] admission
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:15:47.020: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-endpoint-admission-ptb9c-user}, err: <nil>
Oct 13 10:15:47.061: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-endpoint-admission-ptb9c}, err: <nil>
Oct 13 10:15:47.140: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~wFHgUBINJk--dZSvaQ6tLAlhpmiBrrFDo5WKkpPraJQ}, err: <nil>
[AfterEach] [sig-network][endpoints] admission
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-endpoint-admission-ptb9c" for this suite.
skip [github.com/openshift/origin/test/extended/networking/util.go:398]: Not using openshift-sdn

Stderr
_sig-auth__Feature_OAuthServer___Headers__expected_headers_returned_from_the_token_URL__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 106.0s

_sig-cli__oc_adm_images__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 6.3s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_user.openshift.io/v1,_Resource=users__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.6s

_sig-network-edge__Conformance__Area_Networking__Feature_Router__The_HAProxy_router_should_be_able_to_connect_to_a_service_that_is_idled_because_a_GET_on_the_route_will_unidle_it__Skipped_Disconnected___Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 2.2s

Skipped: skip [github.com/openshift/origin/test/extended/router/idle.go:47]: idle feature only supported on OVNKubernetes or OpenShiftSDN
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:15:42.887: INFO: configPath is now "/tmp/configfile2506747213"
Oct 13 10:15:42.887: INFO: The user is now "e2e-test-router-idling-kmvzh-user"
Oct 13 10:15:42.887: INFO: Creating project "e2e-test-router-idling-kmvzh"
Oct 13 10:15:43.078: INFO: Waiting on permissions in project "e2e-test-router-idling-kmvzh" ...
Oct 13 10:15:43.088: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:15:43.219: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:15:43.339: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:15:43.466: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:15:43.478: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:15:43.561: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:15:44.386: INFO: Project "e2e-test-router-idling-kmvzh" has been fully provisioned.
[It] should be able to connect to a service that is idled because a GET on the route will unidle it [Skipped:Disconnected] [Suite:openshift/conformance/parallel/minimal]
  github.com/openshift/origin/test/extended/router/idle.go:43
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:15:44.443: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-router-idling-kmvzh-user}, err: <nil>
Oct 13 10:15:44.464: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-router-idling-kmvzh}, err: <nil>
Oct 13 10:15:44.508: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~-dwFYxp-NJwH9J7TmuuSlnJ64qE7CsCBOULNKlEOees}, err: <nil>
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-router-idling-kmvzh" for this suite.
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/router/idle.go:36
skip [github.com/openshift/origin/test/extended/router/idle.go:47]: idle feature only supported on OVNKubernetes or OpenShiftSDN

Stderr
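
Note: the idle/unidle behaviour this test exercises can be driven by hand on a supported plugin; the service name and route host below are placeholders, not values from this run:

  oc idle hello-openshift        # idles the service's backing workloads
  curl -s http://ROUTE_HOST/     # the first GET on the route should unidle them
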
_sig-builds__Feature_Builds__webhook__TestWebhookGitHubPushWithImage__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.5s

_sig-arch__Managed_cluster_should_set_requests_but_not_limits__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.0s

Failed:
fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:15:43.143: Pods in platform namespaces are not following resource request/limit rules or do not have an exception granted:
  apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni does not have a cpu request (rule: "apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni/request[cpu]")
  apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni does not have a memory request (rule: "apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni/request[memory]")
  apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller does not have a cpu request (rule: "apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller/request[cpu]")
  apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller does not have a memory request (rule: "apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller/request[memory]")

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-arch] Managed cluster
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[It] should set requests but not limits [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/operators/resources.go:30
Oct 13 10:15:43.143: INFO: Pods in platform namespaces had resource request/limit that we may enforce in the future:

apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/initContainer/block-mcs does not have a cpu request (candidate rule: "apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/initContainer/block-mcs/request[cpu]")
apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/initContainer/block-mcs does not have a memory request (candidate rule: "apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/initContainer/block-mcs/request[memory]")
apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/cni-plugins does not have a cpu request (candidate rule: "apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/cni-plugins/request[cpu]")
apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/cni-plugins does not have a memory request (candidate rule: "apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/cni-plugins/request[memory]")
apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/egress-router-binary-copy does not have a cpu request (candidate rule: "apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/egress-router-binary-copy/request[cpu]")
apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/egress-router-binary-copy does not have a memory request (candidate rule: "apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/egress-router-binary-copy/request[memory]")
apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/routeoverride-cni does not have a cpu request (candidate rule: "apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/routeoverride-cni/request[cpu]")
apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/routeoverride-cni does not have a memory request (candidate rule: "apps/v1/DaemonSet/openshift-multus/multus-additional-cni-plugins/initContainer/routeoverride-cni/request[memory]")
v1/Pod/openshift-openstack-infra/coredns-<node>/initContainer/render-config-coredns does not have a cpu request (candidate rule: "v1/Pod/openshift-openstack-infra/coredns-<node>/initContainer/render-config-coredns/request[cpu]")
v1/Pod/openshift-openstack-infra/coredns-<node>/initContainer/render-config-coredns does not have a memory request (candidate rule: "v1/Pod/openshift-openstack-infra/coredns-<node>/initContainer/render-config-coredns/request[memory]")
v1/Pod/openshift-openstack-infra/haproxy-<node>/initContainer/verify-api-int-resolvable does not have a cpu request (candidate rule: "v1/Pod/openshift-openstack-infra/haproxy-<node>/initContainer/verify-api-int-resolvable/request[cpu]")
v1/Pod/openshift-openstack-infra/haproxy-<node>/initContainer/verify-api-int-resolvable does not have a memory request (candidate rule: "v1/Pod/openshift-openstack-infra/haproxy-<node>/initContainer/verify-api-int-resolvable/request[memory]")
v1/Pod/openshift-openstack-infra/keepalived-<node>/initContainer/render-config-keepalived does not have a cpu request (candidate rule: "v1/Pod/openshift-openstack-infra/keepalived-<node>/initContainer/render-config-keepalived/request[cpu]")
v1/Pod/openshift-openstack-infra/keepalived-<node>/initContainer/render-config-keepalived does not have a memory request (candidate rule: "v1/Pod/openshift-openstack-infra/keepalived-<node>/initContainer/render-config-keepalived/request[memory]")
Oct 13 10:15:43.143: FAIL: Pods in platform namespaces are not following resource request/limit rules or do not have an exception granted:
  apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni does not have a cpu request (rule: "apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni/request[cpu]")
  apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni does not have a memory request (rule: "apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni/request[memory]")
  apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller does not have a cpu request (rule: "apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller/request[cpu]")
  apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller does not have a memory request (rule: "apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller/request[memory]")

Full Stack Trace
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0000001a0)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113 +0xba
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc002a96e68)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:64 +0x125
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x7f15658c2fff)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/it_node.go:26 +0x7b
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0027d34a0, 0xc002a97230, {0x83433a0, 0xc000330940})
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:215 +0x2a9
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0027d34a0, {0x83433a0, 0xc000330940})
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:138 +0xe7
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001068c80, 0xc0027d34a0)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:200 +0xe5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001068c80)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:170 +0x1a5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001068c80)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:66 +0xc5
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00031c780, {0x8343660, 0xc002206e10}, {0x0, 0x0}, {0xc000b20060, 0x1, 0x1}, {0x843fe58, 0xc000330940}, ...)
	github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/suite/suite.go:62 +0x4b2
github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc001a4bb90, {0xc000aef460, 0xb8fc7b0, 0x457d780})
	github.com/openshift/origin/pkg/test/ginkgo/cmd_runtest.go:61 +0x3be
main.newRunTestCommand.func1.1()
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x32
github.com/openshift/origin/test/extended/util.WithCleanup(0xc001947c18)
	github.com/openshift/origin/test/extended/util/test.go:168 +0xad
main.newRunTestCommand.func1(0xc001a61680, {0xc000aef460, 0x1, 0x1})
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x38a
github.com/spf13/cobra.(*Command).execute(0xc001a61680, {0xc000aef2f0, 0x1, 0x1})
	github.com/spf13/cobra@v1.1.3/command.go:852 +0x60e
github.com/spf13/cobra.(*Command).ExecuteC(0xc001a60c80)
	github.com/spf13/cobra@v1.1.3/command.go:960 +0x3ad
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.1.3/command.go:897
main.main.func1(0xc0002abc00)
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:84 +0x8a
main.main()
	github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:85 +0x3b6
[AfterEach] [sig-arch] Managed cluster
  github.com/openshift/origin/test/extended/util/client.go:140
[AfterEach] [sig-arch] Managed cluster
  github.com/openshift/origin/test/extended/util/client.go:141
fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:15:43.143: Pods in platform namespaces are not following resource request/limit rules or do not have an exception granted:
  apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni does not have a cpu request (rule: "apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni/request[cpu]")
  apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni does not have a memory request (rule: "apps/v1/DaemonSet/openshift-kuryr/kuryr-cni/container/kuryr-cni/request[memory]")
  apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller does not have a cpu request (rule: "apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller/request[cpu]")
  apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller does not have a memory request (rule: "apps/v1/Deployment/openshift-kuryr/kuryr-controller/container/controller/request[memory]")

Stderr
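
Note: a minimal sketch of the change the failure asks for, with illustrative request values; in practice these fields are owned by the manifests that deploy Kuryr, so a live patch like the following would be reverted and the fix belongs in those manifests:

  # Request values are illustrative only; they should be derived from measured usage
  oc -n openshift-kuryr set resources daemonset/kuryr-cni -c kuryr-cni --requests=cpu=10m,memory=100Mi
  oc -n openshift-kuryr set resources deployment/kuryr-controller -c controller --requests=cpu=10m,memory=100Mi
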
_sig-cli__oc_adm_serviceaccounts__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 8.6s

_sig-auth__Feature_OpenShiftAuthorization__authorization__TestBrowserSafeAuthorizer_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.1s

_sig-auth__Feature_RoleBindingRestrictions__RoleBindingRestrictions_should_be_functional__Create_a_rolebinding_that_also_contains_system_non-existing_users_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.1s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_adoption_will_orphan_all_RCs_and_adopt_them_back_when_recreated__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 194.0s

_sig-builds__Feature_Builds__result_image_should_have_proper_labels_set__Docker_build_from_a_template_should_create_a_image_from__test-docker-build.json__template_with_proper_Docker_labels__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 99.0s

_sig-apps__Feature_OpenShiftControllerManager__TestTriggers_manual__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-auth__Feature_OpenShiftAuthorization__RBAC_proxy_for_openshift_authz__RunLegacyClusterRoleEndpoint_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.2s

_sig-imageregistry__Feature_ImageTriggers__Image_change_build_triggers_TestSimpleImageChangeBuildTriggerFromImageStreamTagCustomWithConfigChange__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.3s

_sig-cli__oc_builds_complex_build_start-build__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 6.8s

_sig-apps__Feature_Jobs__Users_should_be_able_to_create_and_run_a_job_in_a_user_project__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 45.7s

_sig-api-machinery__Feature_APIServer__anonymous_browsers_should_get_a_403_from_/__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 2.5s

_sig-cli__oc_adm_groups__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 5.6s

_sig-network__network_isolation_when_using_OpenshiftSDN_in_a_mode_that_does_not_isolate_namespaces_by_default_should_allow_communication_between_pods_in_different_namespaces_on_the_same_node__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 85.0s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_security.openshift.io/v1,_Resource=rangeallocations__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.5s

_sig-auth__Feature_OpenShiftAuthorization__scopes_TestTokensWithIllegalScopes_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.5s

_sig-auth__Feature_OAuthServer___Token_Expiration__Using_a_OAuth_client_with_a_non-default_token_max_age_to_generate_tokens_that_expire_shortly_works_as_expected_when_using_a_code_authorization_flow__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 35.6s

_sig-api-machinery__Feature_ServerSideApply__Server-Side_Apply_should_work_for_template.openshift.io/v1,_Resource=brokertemplateinstances__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.8s

_sig-instrumentation__Prometheus_when_installed_on_the_cluster_when_using_openshift-sdn_should_be_able_to_get_the_sdn_ovs_flows__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.9s

Skipped: skip [github.com/openshift/origin/test/extended/networking/util.go:398]: Not using openshift-sdn
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:250
[BeforeEach] when using openshift-sdn
  github.com/openshift/origin/test/extended/networking/util.go:396
Oct 13 10:15:05.546: INFO: Not using openshift-sdn
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:140
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:141
skip [github.com/openshift/origin/test/extended/networking/util.go:398]: Not using openshift-sdn

Stderr
_sig-builds__Feature_Builds__webhook__TestWebhookGitHubPushWithImageStream__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.4s

_sig-cli__oc_adm_policy__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 10.0s

_sig-network__Feature_Router__The_HAProxy_router_converges_when_multiple_routers_are_writing_conflicting_status__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 58.8s

_sig-arch__Managed_cluster_should_ensure_pods_use_downstream_images_from_our_release_image_with_proper_ImagePullPolicy__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 4.0s

_sig-cli__oc_adm_new-project__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 15.0s

_sig-devex__Feature_Templates__template-api_TestTemplate__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 1.7s

_Conformance__sig-api-machinery__Feature_APIServer__local_kubeconfig__localhost.kubeconfig__should_be_present_on_all_masters_and_work__Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 87.0s

_sig-network-edge__Conformance__Area_Networking__Feature_Router__The_HAProxy_router_should_pass_the_h2spec_conformance_tests__Suite_openshift/conformance/parallel/minimal_
no-testclass
Time Taken: 1.8s

Skipped: skip [github.com/openshift/origin/test/extended/router/h2spec.go:72]: Skip on platforms where the default router is not exposed by a load balancer service.
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:13:50.050: INFO: configPath is now "/tmp/configfile2889711689"
Oct 13 10:13:50.050: INFO: The user is now "e2e-test-router-h2spec-zl4kz-user"
Oct 13 10:13:50.050: INFO: Creating project "e2e-test-router-h2spec-zl4kz"
Oct 13 10:13:50.200: INFO: Waiting on permissions in project "e2e-test-router-h2spec-zl4kz" ...
Oct 13 10:13:50.210: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:13:50.319: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:13:50.434: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:13:50.541: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:13:50.554: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:13:50.562: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:13:51.086: INFO: Project "e2e-test-router-h2spec-zl4kz" has been fully provisioned.
[It] should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
  github.com/openshift/origin/test/extended/router/h2spec.go:62
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:13:51.122: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-router-h2spec-zl4kz-user}, err: <nil>
Oct 13 10:13:51.138: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-router-h2spec-zl4kz}, err: <nil>
Oct 13 10:13:51.148: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~i3EA2T3WO8D-5TOvtuA-9R8czmlMAIG1YpGr3iCmTyY}, err: <nil>
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-router-h2spec-zl4kz" for this suite.
[AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router]
  github.com/openshift/origin/test/extended/router/h2spec.go:46
skip [github.com/openshift/origin/test/extended/router/h2spec.go:72]: Skip on platforms where the default router is not exposed by a load balancer service.

Stderr
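
Note: the skip condition can be checked by looking at how the default router is exposed (assumes the default ingress controller's service name):

  oc -n openshift-ingress get svc/router-default -o jsonpath='{.spec.type}{"\n"}'
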
_sig-devex__Feature_Templates__templateinstance_readiness_test__should_report_failed_soon_after_an_annotated_objects_has_failed__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 56.8s

_sig-imageregistry__Feature_ImageLookup__Image_policy_should_update_OpenShift_object_image_fields_when_local_names_are_on__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 7.9s

_sig-imageregistry__Feature_ImageExtract__Image_extract_should_extract_content_from_an_image__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 119.0s

_sig-auth__Feature_OAuthServer___Token_Expiration__Using_a_OAuth_client_with_a_non-default_token_max_age_to_generate_tokens_that_do_not_expire_works_as_expected_when_using_a_code_authorization_flow__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 56.1s

_sig-installer__Feature_baremetal__Baremetal_platform_should_have_baremetalhost_resources__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.1s

Skipped: skip [github.com/openshift/origin/test/extended/baremetal/hosts.go:29]: No baremetal platform detected
skipped

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-installer][Feature:baremetal] Baremetal platform should
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-installer][Feature:baremetal] Baremetal platform should
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:13:39.158: INFO: configPath is now "/tmp/configfile522534237"
Oct 13 10:13:39.158: INFO: The user is now "e2e-test-baremetal-48qt7-user"
Oct 13 10:13:39.158: INFO: Creating project "e2e-test-baremetal-48qt7"
Oct 13 10:13:39.510: INFO: Waiting on permissions in project "e2e-test-baremetal-48qt7" ...
Oct 13 10:13:39.517: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:13:39.679: INFO: Waiting for service account "default" secrets (default-token-zzsq8) to include dockercfg/token ...
Oct 13 10:13:39.753: INFO: Waiting for service account "default" secrets (default-token-zzsq8) to include dockercfg/token ...
Oct 13 10:13:39.833: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:13:39.959: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:13:40.097: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:13:40.110: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:13:40.295: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:13:40.948: INFO: Project "e2e-test-baremetal-48qt7" has been fully provisioned.
[It] have baremetalhost resources [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/baremetal/hosts.go:80
STEP: checking platform type
Oct 13 10:13:40.979: INFO: No baremetal platform detected
[AfterEach] [sig-installer][Feature:baremetal] Baremetal platform should
  github.com/openshift/origin/test/extended/util/client.go:140
Oct 13 10:13:41.034: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-baremetal-48qt7-user}, err: <nil>
Oct 13 10:13:41.094: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-baremetal-48qt7}, err: <nil>
Oct 13 10:13:41.137: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~mYzAcebaPVWxCBuXgSiMxdt45N67FvNQOq8WQ6L8BMg}, err: <nil>
[AfterEach] [sig-installer][Feature:baremetal] Baremetal platform should
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-baremetal-48qt7" for this suite.
skip [github.com/openshift/origin/test/extended/baremetal/hosts.go:29]: No baremetal platform detected

Stderr
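
Note: the platform detection behind this skip can be reproduced from the cluster Infrastructure resource:

  oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}{"\n"}'
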
_sig-instrumentation__Prometheus_when_installed_on_the_cluster_should_provide_named_network_metrics__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 133.0s

Failed:
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:603]: Unexpected error:
    <errors.aggregate | len:2, cap:2>: [
        {
            s: "promQL query returned unexpected results:\npod_network_name_info{pod=\"execpod\",namespace=\"e2e-test-prometheus-89962\",interface=\"eth0\"} == 0\n[]",
        },
        {
            s: "promQL query returned unexpected results:\npod_network_name_info{pod=\"execpod\",namespace=\"e2e-test-prometheus-89962\",network_name=\"e2e-test-prometheus-89962/secondary\"} == 0\n[]",
        },
    ]
    [promQL query returned unexpected results:
    pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",interface="eth0"} == 0
    [], promQL query returned unexpected results:
    pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",network_name="e2e-test-prometheus-89962/secondary"} == 0
    []]
occurred
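
Note: the empty result vectors ("result":[] in the stdout below) mean the pod_network_name_info series never showed up for the exec pod. The same query can be re-run by hand against the endpoint shown in the log; the token-retrieval step here is an assumption:

  TOKEN=$(oc -n openshift-monitoring sa get-token prometheus-adapter)   # assumed token source
  curl -sk -H "Authorization: Bearer $TOKEN" \
    'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info'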

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:250
[It] should provide named network metrics [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:574
Oct 13 10:13:41.673: INFO: Creating namespace "e2e-test-prometheus-89962"
Oct 13 10:13:41.994: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:13:57.241: INFO: Creating new exec pod
STEP: verifying named metrics keys
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",interface="eth0"} == 0
Oct 13 10:14:41.486: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0"'
Oct 13 10:14:41.928: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0'\n"
Oct 13 10:14:41.928: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",network_name="e2e-test-prometheus-89962/secondary"} == 0
Oct 13 10:14:41.928: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0"'
Oct 13 10:14:42.292: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0'\n"
Oct 13 10:14:42.293: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",interface="eth0"} == 0
Oct 13 10:14:52.296: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0"'
Oct 13 10:14:52.736: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0'\n"
Oct 13 10:14:52.736: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",network_name="e2e-test-prometheus-89962/secondary"} == 0
Oct 13 10:14:52.736: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0"'
Oct 13 10:14:53.113: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0'\n"
Oct 13 10:14:53.113: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",interface="eth0"} == 0
Oct 13 10:15:03.117: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0"'
Oct 13 10:15:03.479: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0'\n"
Oct 13 10:15:03.479: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",network_name="e2e-test-prometheus-89962/secondary"} == 0
Oct 13 10:15:03.479: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0"'
Oct 13 10:15:03.833: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0'\n"
Oct 13 10:15:03.833: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",interface="eth0"} == 0
Oct 13 10:15:13.834: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0"'
Oct 13 10:15:14.331: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0'\n"
Oct 13 10:15:14.331: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",network_name="e2e-test-prometheus-89962/secondary"} == 0
Oct 13 10:15:14.331: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0"'
Oct 13 10:15:14.855: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0'\n"
Oct 13 10:15:14.856: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",interface="eth0"} == 0
Oct 13 10:15:24.856: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0"'
Oct 13 10:15:25.272: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cinterface%3D%22eth0%22%7D+%3D%3D+0'\n"
Oct 13 10:15:25.272: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",network_name="e2e-test-prometheus-89962/secondary"} == 0
Oct 13 10:15:25.272: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-89962 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0"'
Oct 13 10:15:25.711: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=pod_network_name_info%7Bpod%3D%22execpod%22%2Cnamespace%3D%22e2e-test-prometheus-89962%22%2Cnetwork_name%3D%22e2e-test-prometheus-89962%2Fsecondary%22%7D+%3D%3D+0'\n"
Oct 13 10:15:25.711: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-prometheus-89962".
STEP: Found 8 events.
Oct 13 10:15:50.846: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-89962/execpod to ostest-n5rnf-worker-0-8kq82
Oct 13 10:15:50.846: INFO: At 2022-10-13 10:14:28 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.144.94/23] from kuryr
Oct 13 10:15:50.846: INFO: At 2022-10-13 10:14:28 +0000 UTC - event for execpod: {multus } AddedInterface: Add net1 [10.1.1.0/24] from e2e-test-prometheus-89962/secondary
Oct 13 10:15:50.846: INFO: At 2022-10-13 10:14:28 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-8kq82} Pulling: Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest"
Oct 13 10:15:50.846: INFO: At 2022-10-13 10:14:38 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" in 10.307581629s
Oct 13 10:15:50.846: INFO: At 2022-10-13 10:14:38 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container agnhost-container
Oct 13 10:15:50.846: INFO: At 2022-10-13 10:14:38 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container agnhost-container
Oct 13 10:15:50.846: INFO: At 2022-10-13 10:15:35 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-8kq82} Killing: Stopping container agnhost-container
Oct 13 10:15:50.879: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 13 10:15:50.879: INFO: 
Oct 13 10:15:50.886: INFO: skipping dumping cluster info - cluster too large
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-prometheus-89962" for this suite.
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:603]: Unexpected error:
    <errors.aggregate | len:2, cap:2>: [
        {
            s: "promQL query returned unexpected results:\npod_network_name_info{pod=\"execpod\",namespace=\"e2e-test-prometheus-89962\",interface=\"eth0\"} == 0\n[]",
        },
        {
            s: "promQL query returned unexpected results:\npod_network_name_info{pod=\"execpod\",namespace=\"e2e-test-prometheus-89962\",network_name=\"e2e-test-prometheus-89962/secondary\"} == 0\n[]",
        },
    ]
    [promQL query returned unexpected results:
    pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",interface="eth0"} == 0
    [], promQL query returned unexpected results:
    pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",network_name="e2e-test-prometheus-89962/secondary"} == 0
    []]
occurred
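
The aggregate error above means both PromQL checks came back with an empty vector ([]): no pod_network_name_info series was found for the execpod's eth0 interface or for the e2e-test-prometheus-89962/secondary network, and an empty result is exactly what this test flags as unexpected. A minimal way to re-run one of the checks by hand is sketched below; it assumes `oc create token` is available (older clients may need `oc sa get-token`) and that the prometheus-adapter service account still has query access. The `== 0` comparison from the test is dropped so that any existing series shows up.

  TOKEN=$(oc -n openshift-monitoring create token prometheus-adapter)
  QUERY='pod_network_name_info{pod="execpod",namespace="e2e-test-prometheus-89962",interface="eth0"}'
  oc -n e2e-test-prometheus-89962 exec execpod -- \
    curl -s -k -G -H "Authorization: Bearer ${TOKEN}" \
      --data-urlencode "query=${QUERY}" \
      https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query

A response whose "result" array is still empty reproduces the failure; a non-empty vector would mean the metric is being scraped again.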

Stderr
_sig-apps__Feature_DeploymentConfig__deploymentconfigs_won't_deploy_RC_with_unresolved_images_when_patched_with_empty_image__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 83.0s

_sig-apps__Feature_DeploymentConfig__deploymentconfigs_with_multiple_image_change_triggers_should_run_a_successful_deployment_with_multiple_triggers__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 115.0s

_sig-auth__Feature_Authentication___TestFrontProxy_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.4s

_sig-imageregistry__Feature_ImageLayers__Image_layer_subresource_should_return_layers_from_tagged_images__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 119.0s

_sig-auth__Feature_OpenShiftAuthorization__self-SAR_compatibility__TestSelfSubjectAccessReviewsNonExistingNamespace_should_succeed__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.4s

_sig-builds__Feature_Builds__build_can_reference_a_cluster_service__with_a_build_being_created_from_new-build_should_be_able_to_run_a_build_that_references_a_cluster_service__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 419.0s

Failed:
fail [github.com/openshift/origin/test/extended/builds/service.go:80]: Unexpected error:
    <*errors.errorString | 0xc002105250>: {
        s: "The build \"test-1\" status is \"Failed\"",
    }
    The build "test-1" status is "Failed"
occurred
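
The failure above only records the terminal build phase; the reason the build entered "Failed" has to be read from its logs and events. A short follow-up sketch, assuming the e2e-test-build-service-gtknd namespace and build name test-1 from the Stdout below are still present:

  oc -n e2e-test-build-service-gtknd logs build/test-1
  oc -n e2e-test-build-service-gtknd describe build test-1
  oc -n e2e-test-build-service-gtknd get events --sort-by=.lastTimestamp

These are standard oc invocations; the Status and Reason fields in the describe output usually point at the failing stage (source fetch, assemble, or push).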

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-builds][Feature:Builds] build can reference a cluster service
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-builds][Feature:Builds] build can reference a cluster service
  github.com/openshift/origin/test/extended/util/client.go:116
Oct 13 10:13:38.835: INFO: configPath is now "/tmp/configfile3668672236"
Oct 13 10:13:38.835: INFO: The user is now "e2e-test-build-service-gtknd-user"
Oct 13 10:13:38.835: INFO: Creating project "e2e-test-build-service-gtknd"
Oct 13 10:13:39.285: INFO: Waiting on permissions in project "e2e-test-build-service-gtknd" ...
Oct 13 10:13:39.292: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:13:39.419: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Oct 13 10:13:39.540: INFO: Waiting for service account "deployer" secrets (deployer-dockercfg-7pml5,deployer-dockercfg-7pml5) to include dockercfg/token ...
Oct 13 10:13:39.676: INFO: Waiting for service account "deployer" secrets (deployer-dockercfg-7pml5,deployer-dockercfg-7pml5) to include dockercfg/token ...
Oct 13 10:13:39.737: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Oct 13 10:13:39.881: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Oct 13 10:13:39.895: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Oct 13 10:13:40.134: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Oct 13 10:13:40.835: INFO: Project "e2e-test-build-service-gtknd" has been fully provisioned.
[BeforeEach] 
  github.com/openshift/origin/test/extended/builds/service.go:31
[It] should be able to run a build that references a cluster service [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/builds/service.go:44
STEP: standing up a new hello world nodejs service via oc new-app
Oct 13 10:13:40.836: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=/tmp/configfile3668672236 new-app nodejs~https://github.com/sclorg/nodejs-ex.git --name hello-nodejs'
warning: Cannot check if git requires authentication.
--> Found image 33ddc20 (5 weeks old) in image stream "openshift/nodejs" under tag "14-ubi8" for "nodejs"

    Node.js 14 
    ---------- 
    Node.js 14 available as container is a base platform for building and running various Node.js 14 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    Tags: builder, nodejs, nodejs14

    * A source build using source code from https://github.com/sclorg/nodejs-ex.git will be created
      * The resulting image will be pushed to image stream tag "hello-nodejs:latest"
      * Use 'oc start-build' to trigger a new build

--> Creating resources ...
    imagestream.image.openshift.io "hello-nodejs" created
    buildconfig.build.openshift.io "hello-nodejs" created
    deployment.apps "hello-nodejs" created
    service "hello-nodejs" created
--> Success
    Build scheduled, use 'oc logs -f buildconfig/hello-nodejs' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose service/hello-nodejs' 
    Run 'oc status' to view your app.
Oct 13 10:15:57.860: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:15:59.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:01.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:03.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:05.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:07.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:09.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:11.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:13.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:15.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:17.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:19.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:21.877: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:23.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:25.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:27.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:29.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:31.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:33.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:35.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:37.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:39.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:41.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:43.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:45.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:47.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:49.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:51.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:53.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:55.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:57.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:16:59.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:17:01.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:17:03.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 10:17:05.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"FailedCreate", Message:"Pod \"hello-nodejs-75466689c-qjq8r\" is invalid: spec.containers[0].image: Invalid value: \" \": must not have leading or trailing whitespace"}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 10, 15, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 10, 13, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"hello-nodejs-78679dbb86\" is progressing."}}, CollisionCount:(*int32)(nil)}
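The status polling above keeps reporting the same ReplicaFailure condition: an earlier ReplicaSet (hello-nodejs-75466689c) cannot create its pod because spec.containers[0].image is " ", which fails validation for leading/trailing whitespace. A minimal sketch for surfacing just that condition outside the test harness, assuming the deployment name and namespace from this run and standard oc jsonpath filtering:

$ oc --namespace=e2e-test-build-service-gtknd get deployment hello-nodejs \
    -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'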
STEP: calling oc new-build with a Dockerfile
Oct 13 10:17:07.884: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=/tmp/configfile3668672236 new-build -D - --to test:latest'
--> Found container image de1ef0c (6 days old) from image-registry.openshift-image-registry.svc:5000 for "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest"

    OpenShift Tools 
    --------------- 
    Contains debugging and diagnostic tools for use with an OpenShift cluster.

    Tags: openshift, tools

    * An image stream tag will be created as "tools:latest" that will track the source image
    * A Docker build using a predefined Dockerfile will be created
      * The resulting image will be pushed to image stream tag "test:latest"
      * Every time "tools:latest" changes a new build will be triggered

--> Creating resources with label build=test ...
    imagestream.image.openshift.io "tools" created
    imagestream.image.openshift.io "test" created
    buildconfig.build.openshift.io "test" created
--> Success
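For reference, a sketch of the invocation above with the Dockerfile spelled out. The Dockerfile body is copied verbatim from the BUILD environment recorded in the test-1 build pod dump further down; the heredoc form is only illustrative, since the harness writes the Dockerfile to the command's stdin itself:

$ cat <<'EOF' | oc --namespace=e2e-test-build-service-gtknd new-build -D - --to test:latest
FROM image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
RUN cat /etc/resolv.conf
RUN curl -vvv hello-nodejs:8080
EOF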
STEP: expecting the build is in Complete phase
Oct 13 10:20:34.484: INFO: WaitForABuild returning with error: The build "test-1" status is "Failed"
Oct 13 10:20:34.485: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config logs -f bc/test --timestamps'
Oct 13 10:20:34.729: INFO: 

  build logs : 2022-10-13T10:17:45.925499617Z Replaced Dockerfile FROM image image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
2022-10-13T10:17:48.624365378Z time="2022-10-13T10:17:48Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
2022-10-13T10:17:48.628481989Z I1013 10:17:48.628428       1 defaults.go:102] Defaulting to storage driver "overlay" with options [mountopt=metacopy=on].
2022-10-13T10:17:48.701333149Z Caching blobs under "/var/cache/blobs".
2022-10-13T10:17:48.708386929Z 
2022-10-13T10:17:48.708386929Z Pulling image image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6 ...
2022-10-13T10:17:48.713374814Z Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6...
2022-10-13T10:17:49.006608015Z Getting image source signatures
2022-10-13T10:17:49.130229130Z Copying blob sha256:a2f3f5a14ad25b6ea4a3484161d2fb21e924b5fa662c4fc429d711326af500e2
2022-10-13T10:17:49.159753139Z Copying blob sha256:46ccf5d9b3e4a94e85bfed87163ba4707c06afe97a712db5e466d38d160ecfc1
2022-10-13T10:17:49.196348277Z Copying blob sha256:d033ae3b9132332cad930a5e3a796b1b70903b6f86a069aea1dcdc3cf4c2909e
2022-10-13T10:17:49.284308858Z Copying blob sha256:a80a503a1f95aeefc804ebe15440205f00c2682b566b3f41ff21f7922607f4f7
2022-10-13T10:17:49.284308858Z Copying blob sha256:237bfbffb5f297018ef21e92b8fede75d3ca63e2154236331ef2b2a9dd818a02
2022-10-13T10:17:49.284308858Z Copying blob sha256:39382676eb30fabb7a0616b064e142f6ef58d45216a9124e9358d14b12dedd65
2022-10-13T10:17:59.840437359Z Copying config sha256:de1ef0c021bf845d199099d776f711f71801769970d2548f72e44e75e86be7c1
2022-10-13T10:17:59.850565022Z Writing manifest to image destination
2022-10-13T10:17:59.853404923Z Storing signatures
2022-10-13T10:18:15.948324421Z Adding transient rw bind mount for /run/secrets/rhsm
2022-10-13T10:18:15.951296421Z STEP 1/5: FROM image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6
2022-10-13T10:18:15.996327629Z STEP 2/5: RUN cat /etc/resolv.conf
2022-10-13T10:18:17.087928026Z search e2e-test-build-service-gtknd.svc.cluster.local svc.cluster.local cluster.local ostest.shiftstack.com shiftstack.com
2022-10-13T10:18:17.087928026Z nameserver 172.30.0.10
2022-10-13T10:18:17.087928026Z options ndots:5
2022-10-13T10:18:17.231344800Z time="2022-10-13T10:18:17Z" level=warning msg="Adding metacopy option, configured globally"
2022-10-13T10:18:21.163328167Z --> 099c166becd
2022-10-13T10:18:21.197125567Z STEP 3/5: RUN curl -vvv hello-nodejs:8080
2022-10-13T10:18:21.829448631Z * Rebuilt URL to: hello-nodejs:8080/
2022-10-13T10:18:21.833005179Z *   Trying 172.30.35.175...
2022-10-13T10:18:21.833005179Z * TCP_NODELAY set
[curl progress meter omitted: updates roughly once per second from 10:18:21Z to 10:20:31Z, all showing 0 bytes transferred while the connection attempt hung]
2022-10-13T10:20:31.006021082Z * connect to 172.30.35.175 port 8080 failed: Connection timed out
2022-10-13T10:20:31.006021082Z * Failed to connect to hello-nodejs port 8080: Connection timed out
2022-10-13T10:20:31.006021082Z * Closing connection 0
2022-10-13T10:20:31.006021082Z curl: (7) Failed to connect to hello-nodejs port 8080: Connection timed out
2022-10-13T10:20:31.377481868Z error: build error: error building at STEP "RUN curl -vvv hello-nodejs:8080": error while running runtime: exit status 7
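STEP 3/5 of the test-1 build fails because curl never reaches the hello-nodejs service ClusterIP (172.30.35.175) on port 8080, even though the hello-nodejs pod reports Ready at 10:17:06 in the pod dump below, so the RUN step exits with status 7 and the build is marked Failed. A hedged diagnostic sketch against this namespace (standard oc verbs; whether the service actually lacked ready endpoints at that moment is an assumption this log does not confirm):

$ oc --namespace=e2e-test-build-service-gtknd get endpoints hello-nodejs
$ oc --namespace=e2e-test-build-service-gtknd describe service hello-nodejs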


[AfterEach] 
  github.com/openshift/origin/test/extended/builds/service.go:35
Oct 13 10:20:34.730: INFO: Dumping pod state for namespace e2e-test-build-service-gtknd
Oct 13 10:20:34.730: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config get pods -o yaml'
Oct 13 10:20:34.916: INFO: apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.156.43"
            ],
            "mac": "fa:16:3e:6f:a4:3d",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.156.43"
            ],
            "mac": "fa:16:3e:6f:a4:3d",
            "default": true,
            "dns": {}
        }]
      openshift.io/build.name: hello-nodejs-1
      openshift.io/scc: privileged
    creationTimestamp: "2022-10-13T10:13:42Z"
    labels:
      openshift.io/build.name: hello-nodejs-1
    name: hello-nodejs-1-build
    namespace: e2e-test-build-service-gtknd
    ownerReferences:
    - apiVersion: build.openshift.io/v1
      controller: true
      kind: Build
      name: hello-nodejs-1
      uid: a8be8f2a-247d-461f-8d9b-fc72b3619cb0
    resourceVersion: "939338"
    uid: 1951b41d-42f7-4f3c-a3f1-1988e2a110a9
  spec:
    activeDeadlineSeconds: 604800
    containers:
    - args:
      - openshift-sti-build
      - --loglevel=0
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"hello-nodejs-1","namespace":"e2e-test-build-service-gtknd","uid":"a8be8f2a-247d-461f-8d9b-fc72b3619cb0","resourceVersion":"933500","generation":1,"creationTimestamp":"2022-10-13T10:13:41Z","labels":{"app":"hello-nodejs","app.kubernetes.io/component":"hello-nodejs","app.kubernetes.io/instance":"hello-nodejs","buildconfig":"hello-nodejs","openshift.io/build-config.name":"hello-nodejs","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"hello-nodejs","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"hello-nodejs","uid":"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex.git"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"hello-nodejs"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:13:41Z","lastTransitionTime":"2022-10-13T10:13:41Z"}]}}
      - name: LANG
        value: C.utf8
      - name: SOURCE_REPOSITORY
        value: https://github.com/sclorg/nodejs-ex.git
      - name: SOURCE_URI
        value: https://github.com/sclorg/nodejs-ex.git
      - name: ALLOWED_UIDS
        value: 1-
      - name: DROP_CAPS
        value: KILL,MKNOD,SETGID,SETUID
      - name: PUSH_DOCKERCFG_PATH
        value: /var/run/secrets/openshift.io/push
      - name: PULL_DOCKERCFG_PATH
        value: /var/run/secrets/openshift.io/pull
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_STORAGE_DRIVER
        value: overlay
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: sti-build
      resources: {}
      securityContext:
        privileged: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/lib/kubelet/config.json
        name: node-pullsecrets
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/lib/containers/cache
        name: buildcachedir
      - mountPath: /var/run/secrets/openshift.io/push
        name: builder-dockercfg-kkd9h-push
        readOnly: true
      - mountPath: /var/run/secrets/openshift.io/pull
        name: builder-dockercfg-kkd9h-pull
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/lib/containers/storage
        name: container-storage-root
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-h2xw9
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: builder-dockercfg-kkd9h
    initContainers:
    - args:
      - openshift-git-clone
      - --loglevel=0
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"hello-nodejs-1","namespace":"e2e-test-build-service-gtknd","uid":"a8be8f2a-247d-461f-8d9b-fc72b3619cb0","resourceVersion":"933500","generation":1,"creationTimestamp":"2022-10-13T10:13:41Z","labels":{"app":"hello-nodejs","app.kubernetes.io/component":"hello-nodejs","app.kubernetes.io/instance":"hello-nodejs","buildconfig":"hello-nodejs","openshift.io/build-config.name":"hello-nodejs","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"hello-nodejs","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"hello-nodejs","uid":"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex.git"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"hello-nodejs"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:13:41Z","lastTransitionTime":"2022-10-13T10:13:41Z"}]}}
      - name: LANG
        value: C.utf8
      - name: SOURCE_REPOSITORY
        value: https://github.com/sclorg/nodejs-ex.git
      - name: SOURCE_URI
        value: https://github.com/sclorg/nodejs-ex.git
      - name: ALLOWED_UIDS
        value: 1-
      - name: DROP_CAPS
        value: KILL,MKNOD,SETGID,SETUID
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: git-clone
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-h2xw9
        readOnly: true
    - args:
      - openshift-manage-dockerfile
      - --loglevel=0
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"hello-nodejs-1","namespace":"e2e-test-build-service-gtknd","uid":"a8be8f2a-247d-461f-8d9b-fc72b3619cb0","resourceVersion":"933500","generation":1,"creationTimestamp":"2022-10-13T10:13:41Z","labels":{"app":"hello-nodejs","app.kubernetes.io/component":"hello-nodejs","app.kubernetes.io/instance":"hello-nodejs","buildconfig":"hello-nodejs","openshift.io/build-config.name":"hello-nodejs","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"hello-nodejs","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"hello-nodejs","uid":"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex.git"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"hello-nodejs"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:13:41Z","lastTransitionTime":"2022-10-13T10:13:41Z"}]}}
      - name: LANG
        value: C.utf8
      - name: SOURCE_REPOSITORY
        value: https://github.com/sclorg/nodejs-ex.git
      - name: SOURCE_URI
        value: https://github.com/sclorg/nodejs-ex.git
      - name: ALLOWED_UIDS
        value: 1-
      - name: DROP_CAPS
        value: KILL,MKNOD,SETGID,SETUID
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: manage-dockerfile
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-h2xw9
        readOnly: true
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Never
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: builder
    serviceAccountName: builder
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - hostPath:
        path: /var/lib/kubelet/config.json
        type: File
      name: node-pullsecrets
    - hostPath:
        path: /var/lib/containers/cache
        type: ""
      name: buildcachedir
    - emptyDir: {}
      name: buildworkdir
    - name: builder-dockercfg-kkd9h-push
      secret:
        defaultMode: 384
        secretName: builder-dockercfg-kkd9h
    - name: builder-dockercfg-kkd9h-pull
      secret:
        defaultMode: 384
        secretName: builder-dockercfg-kkd9h
    - configMap:
        defaultMode: 420
        name: hello-nodejs-1-sys-config
      name: build-system-configs
    - configMap:
        defaultMode: 420
        items:
        - key: service-ca.crt
          path: certs.d/image-registry.openshift-image-registry.svc:5000/ca.crt
        name: hello-nodejs-1-ca
      name: build-ca-bundles
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: hello-nodejs-1-global-ca
      name: build-proxy-ca-bundles
    - emptyDir: {}
      name: container-storage-root
    - emptyDir: {}
      name: build-blob-cache
    - name: kube-api-access-h2xw9
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:14:39Z"
      reason: PodCompleted
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:15:53Z"
      reason: PodCompleted
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:15:53Z"
      reason: PodCompleted
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:13:42Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://dc5c93c0abb3e60ff1a2d3b7cd4fa15cccb73f375db947b57eeb488df72f2ba6
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      lastState: {}
      name: sti-build
      ready: false
      restartCount: 0
      started: false
      state:
        terminated:
          containerID: cri-o://dc5c93c0abb3e60ff1a2d3b7cd4fa15cccb73f375db947b57eeb488df72f2ba6
          exitCode: 0
          finishedAt: "2022-10-13T10:15:53Z"
          reason: Completed
          startedAt: "2022-10-13T10:14:40Z"
    hostIP: 10.196.2.169
    initContainerStatuses:
    - containerID: cri-o://c729937fef01e7d6f29729b43a1d9767dfc80ca637b73664fa44eb7950e21da7
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      lastState: {}
      name: git-clone
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://c729937fef01e7d6f29729b43a1d9767dfc80ca637b73664fa44eb7950e21da7
          exitCode: 0
          finishedAt: "2022-10-13T10:14:38Z"
          reason: Completed
          startedAt: "2022-10-13T10:14:35Z"
    - containerID: cri-o://126f1c5511bfb728ab73dcc8291b5f23c0805c8d5c46191e2c109dce308d9377
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      lastState: {}
      name: manage-dockerfile
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://126f1c5511bfb728ab73dcc8291b5f23c0805c8d5c46191e2c109dce308d9377
          exitCode: 0
          finishedAt: "2022-10-13T10:14:39Z"
          reason: Completed
          startedAt: "2022-10-13T10:14:39Z"
    phase: Succeeded
    podIP: 10.128.156.43
    podIPs:
    - ip: 10.128.156.43
    qosClass: BestEffort
    startTime: "2022-10-13T10:13:42Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.157.248"
            ],
            "mac": "fa:16:3e:c0:02:96",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.157.248"
            ],
            "mac": "fa:16:3e:c0:02:96",
            "default": true,
            "dns": {}
        }]
      openshift.io/generated-by: OpenShiftNewApp
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-13T10:15:53Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: hello-nodejs-78679dbb86-
    labels:
      deployment: hello-nodejs
      pod-template-hash: 78679dbb86
    name: hello-nodejs-78679dbb86-7j7fd
    namespace: e2e-test-build-service-gtknd
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: hello-nodejs-78679dbb86
      uid: abacd506-8125-46a1-adfd-6899fe289f53
    resourceVersion: "941105"
    uid: a4b7e06b-1f84-4711-b1bb-81b5022c0470
  spec:
    containers:
    - image: image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e
      imagePullPolicy: IfNotPresent
      name: hello-nodejs
      ports:
      - containerPort: 8080
        protocol: TCP
      resources: {}
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1010520000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-nrsp2
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: default-dockercfg-78824
    nodeName: ostest-n5rnf-worker-0-8kq82
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1010520000
      seLinuxOptions:
        level: s0:c103,c7
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: kube-api-access-nrsp2
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:15:53Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:17:06Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:17:06Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:15:53Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://e305318ec6f36ebe471b3be8974f9c98d63e3a16bc5eb08f4c7bee061c7e8e52
      image: image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e
      imageID: image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e
      lastState: {}
      name: hello-nodejs
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-13T10:17:06Z"
    hostIP: 10.196.2.72
    phase: Running
    podIP: 10.128.157.248
    podIPs:
    - ip: 10.128.157.248
    qosClass: BestEffort
    startTime: "2022-10-13T10:15:53Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.157.22"
            ],
            "mac": "fa:16:3e:23:da:ef",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.157.22"
            ],
            "mac": "fa:16:3e:23:da:ef",
            "default": true,
            "dns": {}
        }]
      openshift.io/build.name: test-1
      openshift.io/scc: privileged
    creationTimestamp: "2022-10-13T10:17:08Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    labels:
      openshift.io/build.name: test-1
    name: test-1-build
    namespace: e2e-test-build-service-gtknd
    ownerReferences:
    - apiVersion: build.openshift.io/v1
      controller: true
      kind: Build
      name: test-1
      uid: 31cfb654-58df-419c-b6f9-d6d51803798a
    resourceVersion: "947549"
    uid: ae9c8896-8786-496f-8485-d865d4d0c6d7
  spec:
    activeDeadlineSeconds: 604800
    containers:
    - args:
      - openshift-docker-build
      - --loglevel=0
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-1","namespace":"e2e-test-build-service-gtknd","uid":"31cfb654-58df-419c-b6f9-d6d51803798a","resourceVersion":"941148","generation":1,"creationTimestamp":"2022-10-13T10:17:08Z","labels":{"build":"test","buildconfig":"test","openshift.io/build-config.name":"test","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test","uid":"ce193f4e-89d4-4552-9e49-5e309e006da9","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:17:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce193f4e-89d4-4552-9e49-5e309e006da9\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"\nFROM image-registry.openshift-image-registry.svc:5000/openshift/tools:latest\nRUN cat /etc/resolv.conf\nRUN curl -vvv hello-nodejs:8080\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/test:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Build configuration change"}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"test"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:17:08Z","lastTransitionTime":"2022-10-13T10:17:08Z"}]}}
      - name: LANG
        value: C.utf8
      - name: PUSH_DOCKERCFG_PATH
        value: /var/run/secrets/openshift.io/push
      - name: PULL_DOCKERCFG_PATH
        value: /var/run/secrets/openshift.io/pull
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_STORAGE_DRIVER
        value: overlay
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: docker-build
      resources: {}
      securityContext:
        privileged: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/lib/kubelet/config.json
        name: node-pullsecrets
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/lib/containers/cache
        name: buildcachedir
      - mountPath: /var/run/secrets/openshift.io/push
        name: builder-dockercfg-kkd9h-push
        readOnly: true
      - mountPath: /var/run/secrets/openshift.io/pull
        name: builder-dockercfg-kkd9h-pull
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/lib/containers/storage
        name: container-storage-root
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-vhgbb
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: builder-dockercfg-kkd9h
    initContainers:
    - args:
      - openshift-manage-dockerfile
      - --loglevel=0
      env:
      - name: BUILD
        value: |
          {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-1","namespace":"e2e-test-build-service-gtknd","uid":"31cfb654-58df-419c-b6f9-d6d51803798a","resourceVersion":"941148","generation":1,"creationTimestamp":"2022-10-13T10:17:08Z","labels":{"build":"test","buildconfig":"test","openshift.io/build-config.name":"test","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test","uid":"ce193f4e-89d4-4552-9e49-5e309e006da9","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:17:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce193f4e-89d4-4552-9e49-5e309e006da9\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"\nFROM image-registry.openshift-image-registry.svc:5000/openshift/tools:latest\nRUN cat /etc/resolv.conf\nRUN curl -vvv hello-nodejs:8080\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/test:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Build configuration change"}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"test"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:17:08Z","lastTransitionTime":"2022-10-13T10:17:08Z"}]}}
      - name: LANG
        value: C.utf8
      - name: BUILD_REGISTRIES_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/registries.conf
      - name: BUILD_REGISTRIES_DIR_PATH
        value: /var/run/configs/openshift.io/build-system/registries.d
      - name: BUILD_SIGNATURE_POLICY_PATH
        value: /var/run/configs/openshift.io/build-system/policy.json
      - name: BUILD_STORAGE_CONF_PATH
        value: /var/run/configs/openshift.io/build-system/storage.conf
      - name: BUILD_BLOBCACHE_DIR
        value: /var/cache/blobs
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imagePullPolicy: IfNotPresent
      name: manage-dockerfile
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /tmp/build
        name: buildworkdir
      - mountPath: /var/run/configs/openshift.io/build-system
        name: build-system-configs
        readOnly: true
      - mountPath: /var/run/configs/openshift.io/certs
        name: build-ca-bundles
      - mountPath: /var/run/configs/openshift.io/pki
        name: build-proxy-ca-bundles
      - mountPath: /var/cache/blobs
        name: build-blob-cache
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-vhgbb
        readOnly: true
    nodeName: ostest-n5rnf-worker-0-8kq82
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Never
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: builder
    serviceAccountName: builder
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - hostPath:
        path: /var/lib/containers/cache
        type: ""
      name: buildcachedir
    - emptyDir: {}
      name: buildworkdir
    - hostPath:
        path: /var/lib/kubelet/config.json
        type: File
      name: node-pullsecrets
    - name: builder-dockercfg-kkd9h-push
      secret:
        defaultMode: 384
        secretName: builder-dockercfg-kkd9h
    - name: builder-dockercfg-kkd9h-pull
      secret:
        defaultMode: 384
        secretName: builder-dockercfg-kkd9h
    - configMap:
        defaultMode: 420
        name: test-1-sys-config
      name: build-system-configs
    - configMap:
        defaultMode: 420
        items:
        - key: service-ca.crt
          path: certs.d/image-registry.openshift-image-registry.svc:5000/ca.crt
        name: test-1-ca
      name: build-ca-bundles
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: test-1-global-ca
      name: build-proxy-ca-bundles
    - emptyDir: {}
      name: container-storage-root
    - emptyDir: {}
      name: build-blob-cache
    - name: kube-api-access-vhgbb
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:17:46Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:20:32Z"
      message: 'containers with unready status: [docker-build]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:20:32Z"
      message: 'containers with unready status: [docker-build]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-13T10:17:08Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://1330a6c507d6bda1d8517057d3a0a7d60ca0e5cb0d0d4f7b6116125030cf6efc
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      lastState: {}
      name: docker-build
      ready: false
      restartCount: 0
      started: false
      state:
        terminated:
          containerID: cri-o://1330a6c507d6bda1d8517057d3a0a7d60ca0e5cb0d0d4f7b6116125030cf6efc
          exitCode: 1
          finishedAt: "2022-10-13T10:20:31Z"
          message: " 0 --:--:--  0:01:47 --:--:--     0\r  0     0    0     0    0
            \    0      0      0 --:--:--  0:01:48 --:--:--     0\r  0     0    0
            \    0    0     0      0      0 --:--:--  0:01:49 --:--:--     0\r  0
            \    0    0     0    0     0      0      0 --:--:--  0:01:50 --:--:--
            \    0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:51
            --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--
            \ 0:01:52 --:--:--     0\r  0     0    0     0    0     0      0      0
            --:--:--  0:01:53 --:--:--     0\r  0     0    0     0    0     0      0
            \     0 --:--:--  0:01:54 --:--:--     0\r  0     0    0     0    0     0
            \     0      0 --:--:--  0:01:55 --:--:--     0\r  0     0    0     0
            \   0     0      0      0 --:--:--  0:01:56 --:--:--     0\r  0     0
            \   0     0    0     0      0      0 --:--:--  0:01:57 --:--:--     0\r
            \ 0     0    0     0    0     0      0      0 --:--:--  0:01:58 --:--:--
            \    0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:59
            --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--
            \ 0:02:00 --:--:--     0\r  0     0    0     0    0     0      0      0
            --:--:--  0:02:01 --:--:--     0\r  0     0    0     0    0     0      0
            \     0 --:--:--  0:02:02 --:--:--     0\r  0     0    0     0    0     0
            \     0      0 --:--:--  0:02:03 --:--:--     0\r  0     0    0     0
            \   0     0      0      0 --:--:--  0:02:04 --:--:--     0\r  0     0
            \   0     0    0     0      0      0 --:--:--  0:02:05 --:--:--     0\r
            \ 0     0    0     0    0     0      0      0 --:--:--  0:02:06 --:--:--
            \    0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:07
            --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--
            \ 0:02:08 --:--:--     0* connect to 172.30.35.175 port 8080 failed: Connection
            timed out\n* Failed to connect to hello-nodejs port 8080: Connection timed
            out\n* Closing connection 0\ncurl: (7) Failed to connect to hello-nodejs
            port 8080: Connection timed out\nerror: build error: error building at
            STEP \"RUN curl -vvv hello-nodejs:8080\": error while running runtime:
            exit status 7\n"
          reason: Error
          startedAt: "2022-10-13T10:17:46Z"
    hostIP: 10.196.2.72
    initContainerStatuses:
    - containerID: cri-o://8c61b3e367ee156d61d63eb22a9aea91f598fb8c5ce4403a4d222065f90f994c
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
      lastState: {}
      name: manage-dockerfile
      ready: true
      restartCount: 0
      state:
        terminated:
          containerID: cri-o://8c61b3e367ee156d61d63eb22a9aea91f598fb8c5ce4403a4d222065f90f994c
          exitCode: 0
          finishedAt: "2022-10-13T10:17:45Z"
          reason: Completed
          startedAt: "2022-10-13T10:17:45Z"
    phase: Running
    podIP: 10.128.157.22
    podIPs:
    - ip: 10.128.157.22
    qosClass: BestEffort
    startTime: "2022-10-13T10:17:08Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Oct 13 10:20:34.916: INFO: Dumping configMap state for namespace e2e-test-build-service-gtknd
Oct 13 10:20:34.916: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config get configmaps -o yaml'
Oct 13 10:20:35.063: INFO: apiVersion: v1
items:
- apiVersion: v1
  data:
    service-ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIDUTCCAjmgAwIBAgIIWqQHBq17DxYwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE
      Awwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTY2NTUwNDg0ODAe
      Fw0yMjEwMTExNjE0MDhaFw0yNDEyMDkxNjE0MDlaMDYxNDAyBgNVBAMMK29wZW5z
      aGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE2NjU1MDQ4NDgwggEiMA0GCSqG
      SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCnQ7kRVFI9BQbx1ViDxaiQ0OxvNHomJEpt
      HoOQ4O+2U28imqMZoMPQH172nxIpxyNufn/4ObLXEBqNshYRcWv6p16GPLAXxYP2
      C4K4H8jQKGPFdtcoe8feeCuWlCghi9AHCa5/pzGK94eDF/hLrsf6zQ+iGx+3FqRf
      9m8CqbGdPkvRzWkbX/cNgIAE2SkEfB1jEiygA0kNmQ0lDN0yOoKUwm3UhOBRCr3m
      mwnYpHWlDQ4anvKKGaz6iqjhn8MZEUXg0b6SpplH/oRko+vqPLYbcxx19Etz7e02
      k7866xfEz8Upw/rq/rfjGqbx0p8WIwmngG1JowbAOdNc4We0mfPZAgMBAAGjYzBh
      MA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTKL313
      5EZX7D2w6+wXudOGBxB6STAfBgNVHSMEGDAWgBTKL3135EZX7D2w6+wXudOGBxB6
      STANBgkqhkiG9w0BAQsFAAOCAQEAGlUnIqdKOpkqrBgCBIBJxJq8WdZeGwTWVHAn
      6LFPsHVSpV8b50ENOQzkrmyL2CM1JPGUFHvUr81pRT7IKKlNa7Gi8f5aUlyg/wc3
      tmYB9PyO7KU3EkVxU7KfzCtMYHu/2H0PNeSTKVzgyLA4V7pEZDvCwhOjfKkerVvM
      CmVoo8XwgTmARM3nNCKQ3Yap0OGU388CmvuRfFkdh1i11xzs34CHIOER+JYSqV5e
      zVCHpEDuUG/yE0pf4XeqchIv3rCWyt1J5egkSMlBHP9Zhb+IVcd8nIA4kSBijRjB
      MYGk7eVOXTTBTiuzt2rBlStjWvtjHspLyTbbObqbtrAdv92YfQ==
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-13T10:13:42Z"
    name: hello-nodejs-1-ca
    namespace: e2e-test-build-service-gtknd
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: hello-nodejs-1-build
      uid: 1951b41d-42f7-4f3c-a3f1-1988e2a110a9
    resourceVersion: "933554"
    uid: 98d0c36b-b5a7-44c7-be98-da3b1145f5c1
- apiVersion: v1
  data:
    ca-bundle.crt: ""
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-13T10:13:42Z"
    name: hello-nodejs-1-global-ca
    namespace: e2e-test-build-service-gtknd
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: hello-nodejs-1-build
      uid: 1951b41d-42f7-4f3c-a3f1-1988e2a110a9
    resourceVersion: "933563"
    uid: 2ca7619a-361b-4469-9dfe-986225198cb1
- apiVersion: v1
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-13T10:13:42Z"
    name: hello-nodejs-1-sys-config
    namespace: e2e-test-build-service-gtknd
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: hello-nodejs-1-build
      uid: 1951b41d-42f7-4f3c-a3f1-1988e2a110a9
    resourceVersion: "933560"
    uid: bf7a9632-305f-4126-8476-5f3b78635160
- apiVersion: v1
  data:
    ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIDMjCCAhqgAwIBAgIILN1CKhOBc2UwDQYJKoZIhvcNAQELBQAwNzESMBAGA1UE
      CxMJb3BlbnNoaWZ0MSEwHwYDVQQDExhrdWJlLWFwaXNlcnZlci1sYi1zaWduZXIw
      HhcNMjIxMDExMTYwMjIzWhcNMzIxMDA4MTYwMjIzWjA3MRIwEAYDVQQLEwlvcGVu
      c2hpZnQxITAfBgNVBAMTGGt1YmUtYXBpc2VydmVyLWxiLXNpZ25lcjCCASIwDQYJ
      KoZIhvcNAQEBBQADggEPADCCAQoCggEBANuVs0Z9M+eZOvZAbxX1JEXhGJ7cFlW+
      q1ZHT9zSgI6Riga/Jw/NjL+kjnhxsqz3ez/aDsva2zPmXaOZ2FjW7peUOMh089n0
      n5WbEB0tBNCZCBOpXvWu3/2wqfLfa8hl+YpbU+pQvO7mXqMdrIzinJpLbl20HlfA
      jlhTWSGAPqZft4hJzjel2SZiIUlCnp7FrEG42JFxREExuSkoPLhWRC0xfFB5pA9V
      JklEsBVb23M4Vti/BfwukvAiplx2X69+Qc9fXm7i+L45eSc9yQss5X67/1z7RsPa
      n3708K8JGFeXYuJ6nYQooQbhj3cvxtY31TPxIKcQE1FJa0Qmft+VYZkCAwEAAaNC
      MEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJlv
      mLJKYamTvm9Ks5bqMTNNbuFwMA0GCSqGSIb3DQEBCwUAA4IBAQCMEXtW2kb4gCyF
      NqW2f5ABK+9eMe9MjGUNYDY2kdYMwiw/nz89kwt/a3Ck5mTHnZIENNjTkYdv2wTC
      DFFCXQJFbSqyCpfEaTuCRpsBM4sZJrZdpjW74aqo7KwyQ3Gm9fClJuGfa2QF/gWU
      v7QF/8u732NVWC6DUUzu6xBMrTDnOjtKeMJ5PvfUpZv9u/RvWmkHBpQZfroBvuDy
      8PDJUjgJj0k/gIXljO3K9yLUHw76lKimmXdn5JR/UjZasQVY3t5FMDt1No6VjpLt
      811ELzxHsYsrzbeKlzBbZko1EIhIV9b5DXmykivnucJJC6gNrXnd4RMp/yHrdluN
      e5IpzDw7
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDQDCCAiigAwIBAgIICo9mBwuOce4wDQYJKoZIhvcNAQELBQAwPjESMBAGA1UE
      CxMJb3BlbnNoaWZ0MSgwJgYDVQQDEx9rdWJlLWFwaXNlcnZlci1sb2NhbGhvc3Qt
      c2lnbmVyMB4XDTIyMTAxMTE2MDIyMloXDTMyMTAwODE2MDIyMlowPjESMBAGA1UE
      CxMJb3BlbnNoaWZ0MSgwJgYDVQQDEx9rdWJlLWFwaXNlcnZlci1sb2NhbGhvc3Qt
      c2lnbmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqeSZnR3XSMrI
      As3vxbqT8KadC2vLa1Sv5VnnMnEaMzuJ0R0AwIgLDOVhNQKMN6KKnrHcdXhBuBT9
      kSgSKp4zlw65L7Eomgz2pGTqXrSL06xaXaxUXt7XxqDwEBEEueTacjSEkFbuSVLs
      x9alZYzg9ExhAz7za665/03tTEa+4bglAwqnw7/3xEauH7tyP+d3niLSewwXg8UF
      JtxZ7CHMKy/afV9+q61I6ULkj+V+Lt9eo11ucYTnJzmlGEac/n7fLj++lFwiafzq
      GxamgCaXBo6INUpX/8x2KZemHEXMYMRnsNHRmXjZi7PJIEP4doPxWEDS6reuS0P5
      urUkyOHfAQIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB
      /zAdBgNVHQ4EFgQUP6qELERdYc51gPE2PEiS/skbuEcwDQYJKoZIhvcNAQELBQAD
      ggEBABzoKx1Od3m2Koc5+g4SAFZT1+1LYBC8c+ew3v9mizzH6X5kXopdJkFZtHEN
      GBnd8Dlmjwu+DBppYWBvTz1/hC2+pZSVO4lbEWHeRB28unvzRfdT49OtADyCi0b4
      +Mr4C8BYb9FnfPXrMK1o7a8TW+NiV+Q5jeNnWSgqohV0U6peSFtHLWkfm3jF7xLL
      FrWPxiISIz37nPIIDdUrlNPVaNAI1kdynxC58faJJXfO+wWn/7ShvglL+sYhnL+K
      Fh2Nbqv6p+hBHLJ2BOLQNwuGDv2LNZ+/hHUCboDaSEBh0AhTiGYzLWvtMeF6WGGI
      HyS+I56cBeKvPQzlFdone09rvqo=
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDTDCCAjSgAwIBAgIIOwJx6MDGIWYwDQYJKoZIhvcNAQELBQAwRDESMBAGA1UE
      CxMJb3BlbnNoaWZ0MS4wLAYDVQQDEyVrdWJlLWFwaXNlcnZlci1zZXJ2aWNlLW5l
      dHdvcmstc2lnbmVyMB4XDTIyMTAxMTE2MDIyMloXDTMyMTAwODE2MDIyMlowRDES
      MBAGA1UECxMJb3BlbnNoaWZ0MS4wLAYDVQQDEyVrdWJlLWFwaXNlcnZlci1zZXJ2
      aWNlLW5ldHdvcmstc2lnbmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
      AQEAsC+Rrx7G1shNCywb0QxGuLYAzSoo3ML6l2KVR9NHydMQBDiOFd0+Sc7mczzu
      DoA70JPRyApjCm2QsZ1hNGV4WvDYzYemVQJgN1h8ogooohJNGieN9fnkfTiG96Sz
      0klaylWtr2WF0W6zyDMjT9DaRdQl9Th1lNBUFF3cwY+XIzzSZdS1ErUj1H6rzcdh
      HDoLmsuKkU9iQXDaOEhZ6xVEEF0P9Ich9PhsDjut6mmyC+bAOMNd+nqgzeX1JCC/
      wlEhSV6TWIhxj5N8Ug/lsevxtq0HQLMaBowCmjBzuvc93WfndxGzcWFKqjNq5ZMW
      j8qbGel+3n0buQrjsE8384bAbwIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAqQwDwYD
      VR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUBWOF9EVp9ugxbTYWOonVZLpqHjUwDQYJ
      KoZIhvcNAQELBQADggEBAIoS1fo2hRMp0iBRzIkl7B6ELDmWl7t6lZVp9qxYgbk+
      O5eBuuh5b4ZDKwFt74IlvLvXJTESGMrEPo47hf+FmJPbqrBx3Dc4OsTwkhVwmdzb
      CfEUzCYtVV2lKOH5EeMG6lb5wbTznYl/W0Vh4qZ6qNSRPwwSeMf0OWtdXu89QEm5
      F5T6GVlSZXBqs1AzuljEbBa9i/ExAenOQBqWow0JeTkWV1AgngIOh5+wBSOHYeaD
      154r0GVaDixcRvB1KC+QzOyHzSUkjlnKzzsY09qiY2Ne6PfXDLm6TCzI6vqtUM19
      dK/uFHtl/UwN9BreR7iElcZUr+c8U8lSFOSm66JmkeI=
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDlzCCAn+gAwIBAgIIfks7M1UA4OowDQYJKoZIhvcNAQELBQAwWTFXMFUGA1UE
      AwxOb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2xvY2FsaG9zdC1y
      ZWNvdmVyeS1zZXJ2aW5nLXNpZ25lckAxNjY1NTA0ODk3MB4XDTIyMTAxMTE2MTQ1
      N1oXDTMyMTAwODE2MTQ1OFowWTFXMFUGA1UEAwxOb3BlbnNoaWZ0LWt1YmUtYXBp
      c2VydmVyLW9wZXJhdG9yX2xvY2FsaG9zdC1yZWNvdmVyeS1zZXJ2aW5nLXNpZ25l
      ckAxNjY1NTA0ODk3MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA70nx
      R0LL9lcuXjtZoAIdPQBb4pHxv2d2ClCxNsWTnQYiMPL6xUlDXLrzLeM21dsmHi7h
      Kmsxfyk/dkXIO5v8j1EA52L0hMUTVaxxisZo9WCAimDuwIhkDffhYKyXxztB75A5
      OheKWWdq+HioM3cDhRZi9ifPv10PfPpKPK660bCOzQDJXnvrgI8P3OdjCILzu0ZL
      GVJiqFJX8gHt+I7EaWRsZZmomhmwdg28j/MevgYoF91aTXK9skbaEEjABtgytRqQ
      udTM1lS8G6A/ezOEkobJxKk65FQ9Gld0Wc36BVA85v+EiXK7selhHTozueo34nLP
      gwRJUU11Pw2PI6vyfwIDAQABo2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/
      BAUwAwEB/zAdBgNVHQ4EFgQUybhbyl062rBbI8U++BRyn6Ufx1kwHwYDVR0jBBgw
      FoAUybhbyl062rBbI8U++BRyn6Ufx1kwDQYJKoZIhvcNAQELBQADggEBAD8ZXhK4
      7GJLcjRCTNFCuOoZoxniIFePyz+vywNk+nVADNbWHsbTYPr5lrdqNumzop7uQhj5
      m0gBnEq9WFQvf8aYrkm3Y+qxs8+MyioshINFzNIej3EcE1qBmh84IjiHE9YWjYCe
      WKKNMRZopFx9ZAY3Qky8zgAPKKE8P7xTvHdNKV8T80qgei74D810niig8rwmthOU
      KcDbcigPykla3bJ3hEQCQI0Y0xLzptEZMb8jlSVlfVx/WAuyfVnPSRBHwyey3gpQ
      sXuMng2EzLIaODEuoRRHgTEfqRT1d20+rCXz/XQTsCHjtn3Yx6Nu44FO6oTm1sAb
      XQOxjoXGgUv7o2M=
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDbzCCAlegAwIBAgIIY75bKNpoEAEwDQYJKoZIhvcNAQELBQAwJjEkMCIGA1UE
      AwwbaW5ncmVzcy1vcGVyYXRvckAxNjY1NTA1MDM5MB4XDTIyMTAxMTE2MTg1N1oX
      DTI0MTAxMDE2MTg1OFowJzElMCMGA1UEAwwcKi5hcHBzLm9zdGVzdC5zaGlmdHN0
      YWNrLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKJA0zaaSN20
      Q5BTuruRbaGcTbybOdVWiYrmi8PrgXnk8obLF4W4Bmtsb/wpdc5M5BAP/rZtl4WF
      FlAfynzuPWlEIbMwgfFlKVG7l1gWWGmUvUnSev713+dfEQyFSgKYVH/AxkpzOn1f
      dONQ6vJ4QzmKAUpm7Bp00SuVvY0UL1+5jzv1SVpohyJ4UmYQuOOpjkMPoJYqLPNF
      cM6U910MyqViK7UH0NyNMB0Mh19byJvBlhfRLHw7Fvw+sPtnQN7iabTIHphaSrZI
      tDdFzLLtf+PMbLl6w5k18ZicH9J5EPyPuDz/zLkMDKaSpTr8CsCzwyMceM9IwTBC
      TDcIU8C8fH8CAwEAAaOBnzCBnDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYI
      KwYBBQUHAwEwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUNSJv6olBlRnaqXAwdaZy
      sp7dGLMwHwYDVR0jBBgwFoAUD1SAeJJkWGq+U06gBT1344dhVlgwJwYDVR0RBCAw
      HoIcKi5hcHBzLm9zdGVzdC5zaGlmdHN0YWNrLmNvbTANBgkqhkiG9w0BAQsFAAOC
      AQEAj/YFuJJPU3E/VansQjzpWhFVOjbaplfaYn1gvsEyokQnuxAAOzAfqvjnEHrU
      xVVJV13ckcjJ7VIUUy5wGf7CgJRLXPbjJBtOBDm2WyIf0qULQKG+tJ67+eh81BWq
      DnIrpL8QbiPzl9ufkbQCTifeli2yPiyNepn5d4b+RdhGVPS9sLZiU3SBqa5Tavtl
      T/HNrqWf+0F/yTtmIKs00d5lN5+/8bJcds2S4g9C2dqeIMLZnmVTgD1H9Ky17B1J
      /SRnHd1THpQ3HiCg/aPzlyT2S9kswzzo0DA8WFtuD1pbMeERPWu0gSJtUmGu+htr
      3HAqITRplOUs+7rAvSG/ZbRyaQ==
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDDDCCAfSgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtpbmdy
      ZXNzLW9wZXJhdG9yQDE2NjU1MDUwMzkwHhcNMjIxMDExMTYxNzE4WhcNMjQxMDEw
      MTYxNzE5WjAmMSQwIgYDVQQDDBtpbmdyZXNzLW9wZXJhdG9yQDE2NjU1MDUwMzkw
      ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDZk9YqsZXxy/YkoT+RarcI
      Ko20B7xhiThks1rVncJ2HBUo8V3hurUO5tOrAAbIeMYj/GzdllCciTAhgpV65lGg
      GwklkBuRSp8rhqrsqpePoNbyLiHg97Pv5PDcrpwfvVBd3kPPQhgpWNaNIctNBQMD
      fSBQqbW+Qq0/mOcqVRmew9LRr9VDY/FH9mjk1s5kp/d7YdpveTf7o9Ay6tW/Jmm+
      An8CteDngHcDT03etReUOZvhSb9yt52Wry8uisfdmZmNZ0ZMNSVJWctTWjSsknhW
      1gHpDPWNlz7DKYrzjaKt5U2WYmQ7gNeZ4MOJHzx5FNvjc9y3oDYN/WKQxbQ/dAdN
      AgMBAAGjRTBDMA4GA1UdDwEB/wQEAwICpDASBgNVHRMBAf8ECDAGAQH/AgEAMB0G
      A1UdDgQWBBQPVIB4kmRYar5TTqAFPXfjh2FWWDANBgkqhkiG9w0BAQsFAAOCAQEA
      VscU7ev2DCrEl8qxDhgqCZesY+i2HmQPS6lMm/kvwpXskDnSJtt5y9WJrY0OnOdc
      W2MDcDSbMckZ8ripMFPIfETtuCCAJTnkGa31eNOB4VvqeTf0LDJtK/zAUVKDvd8K
      Yc3dDeutLpwAJwwSLeQrEw2FTVfWp4RY82OqHiXvoihIYlTSfmgrMMXylPpCHY+l
      ZvC144hMh/TV3W+xyJmh0EQ3LBE4zLqFv2ysyQ4o6lhwdmFPAmEJ37oc6tb3ZKQA
      VpfACCP/POIw45BPmeBkggEw9KjpLyB1K1G8wvDgeOTSBTK7in801xsA9ckosS7F
      a3dfOThY2ElYs2djq3Dr1w==
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    annotations:
      kubernetes.io/description: Contains a CA bundle that can be used to verify the
        kube-apiserver when using internal endpoints such as the internal service
        IP or kubernetes.default.svc. No other usage is guaranteed across distributions
        of Kubernetes clusters.
    creationTimestamp: "2022-10-13T10:13:38Z"
    name: kube-root-ca.crt
    namespace: e2e-test-build-service-gtknd
    resourceVersion: "933075"
    uid: 146177f9-47c8-4fbd-8d2a-fec738e9d1d7
- apiVersion: v1
  data:
    service-ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIDUTCCAjmgAwIBAgIIWqQHBq17DxYwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE
      Awwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTY2NTUwNDg0ODAe
      Fw0yMjEwMTExNjE0MDhaFw0yNDEyMDkxNjE0MDlaMDYxNDAyBgNVBAMMK29wZW5z
      aGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE2NjU1MDQ4NDgwggEiMA0GCSqG
      SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCnQ7kRVFI9BQbx1ViDxaiQ0OxvNHomJEpt
      HoOQ4O+2U28imqMZoMPQH172nxIpxyNufn/4ObLXEBqNshYRcWv6p16GPLAXxYP2
      C4K4H8jQKGPFdtcoe8feeCuWlCghi9AHCa5/pzGK94eDF/hLrsf6zQ+iGx+3FqRf
      9m8CqbGdPkvRzWkbX/cNgIAE2SkEfB1jEiygA0kNmQ0lDN0yOoKUwm3UhOBRCr3m
      mwnYpHWlDQ4anvKKGaz6iqjhn8MZEUXg0b6SpplH/oRko+vqPLYbcxx19Etz7e02
      k7866xfEz8Upw/rq/rfjGqbx0p8WIwmngG1JowbAOdNc4We0mfPZAgMBAAGjYzBh
      MA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTKL313
      5EZX7D2w6+wXudOGBxB6STAfBgNVHSMEGDAWgBTKL3135EZX7D2w6+wXudOGBxB6
      STANBgkqhkiG9w0BAQsFAAOCAQEAGlUnIqdKOpkqrBgCBIBJxJq8WdZeGwTWVHAn
      6LFPsHVSpV8b50ENOQzkrmyL2CM1JPGUFHvUr81pRT7IKKlNa7Gi8f5aUlyg/wc3
      tmYB9PyO7KU3EkVxU7KfzCtMYHu/2H0PNeSTKVzgyLA4V7pEZDvCwhOjfKkerVvM
      CmVoo8XwgTmARM3nNCKQ3Yap0OGU388CmvuRfFkdh1i11xzs34CHIOER+JYSqV5e
      zVCHpEDuUG/yE0pf4XeqchIv3rCWyt1J5egkSMlBHP9Zhb+IVcd8nIA4kSBijRjB
      MYGk7eVOXTTBTiuzt2rBlStjWvtjHspLyTbbObqbtrAdv92YfQ==
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    annotations:
      service.beta.openshift.io/inject-cabundle: "true"
    creationTimestamp: "2022-10-13T10:13:38Z"
    name: openshift-service-ca.crt
    namespace: e2e-test-build-service-gtknd
    resourceVersion: "933096"
    uid: 73331e2e-ea8e-45fe-8226-7e572a037ee6
- apiVersion: v1
  data:
    service-ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIDUTCCAjmgAwIBAgIIWqQHBq17DxYwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE
      Awwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTY2NTUwNDg0ODAe
      Fw0yMjEwMTExNjE0MDhaFw0yNDEyMDkxNjE0MDlaMDYxNDAyBgNVBAMMK29wZW5z
      aGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE2NjU1MDQ4NDgwggEiMA0GCSqG
      SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCnQ7kRVFI9BQbx1ViDxaiQ0OxvNHomJEpt
      HoOQ4O+2U28imqMZoMPQH172nxIpxyNufn/4ObLXEBqNshYRcWv6p16GPLAXxYP2
      C4K4H8jQKGPFdtcoe8feeCuWlCghi9AHCa5/pzGK94eDF/hLrsf6zQ+iGx+3FqRf
      9m8CqbGdPkvRzWkbX/cNgIAE2SkEfB1jEiygA0kNmQ0lDN0yOoKUwm3UhOBRCr3m
      mwnYpHWlDQ4anvKKGaz6iqjhn8MZEUXg0b6SpplH/oRko+vqPLYbcxx19Etz7e02
      k7866xfEz8Upw/rq/rfjGqbx0p8WIwmngG1JowbAOdNc4We0mfPZAgMBAAGjYzBh
      MA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTKL313
      5EZX7D2w6+wXudOGBxB6STAfBgNVHSMEGDAWgBTKL3135EZX7D2w6+wXudOGBxB6
      STANBgkqhkiG9w0BAQsFAAOCAQEAGlUnIqdKOpkqrBgCBIBJxJq8WdZeGwTWVHAn
      6LFPsHVSpV8b50ENOQzkrmyL2CM1JPGUFHvUr81pRT7IKKlNa7Gi8f5aUlyg/wc3
      tmYB9PyO7KU3EkVxU7KfzCtMYHu/2H0PNeSTKVzgyLA4V7pEZDvCwhOjfKkerVvM
      CmVoo8XwgTmARM3nNCKQ3Yap0OGU388CmvuRfFkdh1i11xzs34CHIOER+JYSqV5e
      zVCHpEDuUG/yE0pf4XeqchIv3rCWyt1J5egkSMlBHP9Zhb+IVcd8nIA4kSBijRjB
      MYGk7eVOXTTBTiuzt2rBlStjWvtjHspLyTbbObqbtrAdv92YfQ==
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-13T10:17:08Z"
    name: test-1-ca
    namespace: e2e-test-build-service-gtknd
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: test-1-build
      uid: ae9c8896-8786-496f-8485-d865d4d0c6d7
    resourceVersion: "941151"
    uid: 2baa69b7-a87e-4667-8d38-da02bf033e86
- apiVersion: v1
  data:
    ca-bundle.crt: ""
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-13T10:17:08Z"
    name: test-1-global-ca
    namespace: e2e-test-build-service-gtknd
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: test-1-build
      uid: ae9c8896-8786-496f-8485-d865d4d0c6d7
    resourceVersion: "941156"
    uid: 8fe48930-fbfe-4654-bd2e-5c8ed0cd818a
- apiVersion: v1
  kind: ConfigMap
  metadata:
    creationTimestamp: "2022-10-13T10:17:08Z"
    name: test-1-sys-config
    namespace: e2e-test-build-service-gtknd
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: test-1-build
      uid: ae9c8896-8786-496f-8485-d865d4d0c6d7
    resourceVersion: "941154"
    uid: b8461565-9b95-4b31-b9df-34ad65998e16
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Oct 13 10:20:35.091: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config describe pod/hello-nodejs-1-build -n e2e-test-build-service-gtknd'
Oct 13 10:20:35.303: INFO: Describing pod "hello-nodejs-1-build"
Name:         hello-nodejs-1-build
Namespace:    e2e-test-build-service-gtknd
Priority:     0
Node:         ostest-n5rnf-worker-0-94fxs/10.196.2.169
Start Time:   Thu, 13 Oct 2022 10:13:42 +0000
Labels:       openshift.io/build.name=hello-nodejs-1
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.156.43"
                    ],
                    "mac": "fa:16:3e:6f:a4:3d",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.156.43"
                    ],
                    "mac": "fa:16:3e:6f:a4:3d",
                    "default": true,
                    "dns": {}
                }]
              openshift.io/build.name: hello-nodejs-1
              openshift.io/scc: privileged
Status:       Succeeded
IP:           10.128.156.43
IPs:
  IP:           10.128.156.43
Controlled By:  Build/hello-nodejs-1
Init Containers:
  git-clone:
    Container ID:  cri-o://c729937fef01e7d6f29729b43a1d9767dfc80ca637b73664fa44eb7950e21da7
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Port:          <none>
    Host Port:     <none>
    Args:
      openshift-git-clone
      --loglevel=0
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 13 Oct 2022 10:14:35 +0000
      Finished:     Thu, 13 Oct 2022 10:14:38 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      BUILD:                        {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"hello-nodejs-1","namespace":"e2e-test-build-service-gtknd","uid":"a8be8f2a-247d-461f-8d9b-fc72b3619cb0","resourceVersion":"933500","generation":1,"creationTimestamp":"2022-10-13T10:13:41Z","labels":{"app":"hello-nodejs","app.kubernetes.io/component":"hello-nodejs","app.kubernetes.io/instance":"hello-nodejs","buildconfig":"hello-nodejs","openshift.io/build-config.name":"hello-nodejs","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"hello-nodejs","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"hello-nodejs","uid":"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex.git"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"hello-nodejs"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:13:41Z","lastTransitionTime":"2022-10-13T10:13:41Z"}]}}
                                    
      LANG:                         C.utf8
      SOURCE_REPOSITORY:            https://github.com/sclorg/nodejs-ex.git
      SOURCE_URI:                   https://github.com/sclorg/nodejs-ex.git
      ALLOWED_UIDS:                 1-
      DROP_CAPS:                    KILL,MKNOD,SETGID,SETUID
      BUILD_REGISTRIES_CONF_PATH:   /var/run/configs/openshift.io/build-system/registries.conf
      BUILD_REGISTRIES_DIR_PATH:    /var/run/configs/openshift.io/build-system/registries.d
      BUILD_SIGNATURE_POLICY_PATH:  /var/run/configs/openshift.io/build-system/policy.json
      BUILD_STORAGE_CONF_PATH:      /var/run/configs/openshift.io/build-system/storage.conf
      BUILD_BLOBCACHE_DIR:          /var/cache/blobs
      HTTP_PROXY:                   
      HTTPS_PROXY:                  
      NO_PROXY:                     
    Mounts:
      /tmp/build from buildworkdir (rw)
      /var/cache/blobs from build-blob-cache (rw)
      /var/run/configs/openshift.io/build-system from build-system-configs (ro)
      /var/run/configs/openshift.io/certs from build-ca-bundles (rw)
      /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h2xw9 (ro)
  manage-dockerfile:
    Container ID:  cri-o://126f1c5511bfb728ab73dcc8291b5f23c0805c8d5c46191e2c109dce308d9377
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Port:          <none>
    Host Port:     <none>
    Args:
      openshift-manage-dockerfile
      --loglevel=0
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 13 Oct 2022 10:14:39 +0000
      Finished:     Thu, 13 Oct 2022 10:14:39 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      BUILD:                        {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"hello-nodejs-1","namespace":"e2e-test-build-service-gtknd","uid":"a8be8f2a-247d-461f-8d9b-fc72b3619cb0","resourceVersion":"933500","generation":1,"creationTimestamp":"2022-10-13T10:13:41Z","labels":{"app":"hello-nodejs","app.kubernetes.io/component":"hello-nodejs","app.kubernetes.io/instance":"hello-nodejs","buildconfig":"hello-nodejs","openshift.io/build-config.name":"hello-nodejs","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"hello-nodejs","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"hello-nodejs","uid":"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex.git"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"hello-nodejs"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:13:41Z","lastTransitionTime":"2022-10-13T10:13:41Z"}]}}
                                    
      LANG:                         C.utf8
      SOURCE_REPOSITORY:            https://github.com/sclorg/nodejs-ex.git
      SOURCE_URI:                   https://github.com/sclorg/nodejs-ex.git
      ALLOWED_UIDS:                 1-
      DROP_CAPS:                    KILL,MKNOD,SETGID,SETUID
      BUILD_REGISTRIES_CONF_PATH:   /var/run/configs/openshift.io/build-system/registries.conf
      BUILD_REGISTRIES_DIR_PATH:    /var/run/configs/openshift.io/build-system/registries.d
      BUILD_SIGNATURE_POLICY_PATH:  /var/run/configs/openshift.io/build-system/policy.json
      BUILD_STORAGE_CONF_PATH:      /var/run/configs/openshift.io/build-system/storage.conf
      BUILD_BLOBCACHE_DIR:          /var/cache/blobs
      HTTP_PROXY:                   
      HTTPS_PROXY:                  
      NO_PROXY:                     
    Mounts:
      /tmp/build from buildworkdir (rw)
      /var/cache/blobs from build-blob-cache (rw)
      /var/run/configs/openshift.io/build-system from build-system-configs (ro)
      /var/run/configs/openshift.io/certs from build-ca-bundles (rw)
      /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h2xw9 (ro)
Containers:
  sti-build:
    Container ID:  cri-o://dc5c93c0abb3e60ff1a2d3b7cd4fa15cccb73f375db947b57eeb488df72f2ba6
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Port:          <none>
    Host Port:     <none>
    Args:
      openshift-sti-build
      --loglevel=0
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 13 Oct 2022 10:14:40 +0000
      Finished:     Thu, 13 Oct 2022 10:15:53 +0000
    Ready:          False
    Restart Count:  0
    Environment:
      BUILD:                        {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"hello-nodejs-1","namespace":"e2e-test-build-service-gtknd","uid":"a8be8f2a-247d-461f-8d9b-fc72b3619cb0","resourceVersion":"933500","generation":1,"creationTimestamp":"2022-10-13T10:13:41Z","labels":{"app":"hello-nodejs","app.kubernetes.io/component":"hello-nodejs","app.kubernetes.io/instance":"hello-nodejs","buildconfig":"hello-nodejs","openshift.io/build-config.name":"hello-nodejs","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"hello-nodejs","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"hello-nodejs","uid":"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b474f51c-ebf9-4da5-850b-5c6ac5ebbd3f\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex.git"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"hello-nodejs"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:13:41Z","lastTransitionTime":"2022-10-13T10:13:41Z"}]}}
                                    
      LANG:                         C.utf8
      SOURCE_REPOSITORY:            https://github.com/sclorg/nodejs-ex.git
      SOURCE_URI:                   https://github.com/sclorg/nodejs-ex.git
      ALLOWED_UIDS:                 1-
      DROP_CAPS:                    KILL,MKNOD,SETGID,SETUID
      PUSH_DOCKERCFG_PATH:          /var/run/secrets/openshift.io/push
      PULL_DOCKERCFG_PATH:          /var/run/secrets/openshift.io/pull
      BUILD_REGISTRIES_CONF_PATH:   /var/run/configs/openshift.io/build-system/registries.conf
      BUILD_REGISTRIES_DIR_PATH:    /var/run/configs/openshift.io/build-system/registries.d
      BUILD_SIGNATURE_POLICY_PATH:  /var/run/configs/openshift.io/build-system/policy.json
      BUILD_STORAGE_CONF_PATH:      /var/run/configs/openshift.io/build-system/storage.conf
      BUILD_STORAGE_DRIVER:         overlay
      BUILD_BLOBCACHE_DIR:          /var/cache/blobs
      HTTP_PROXY:                   
      HTTPS_PROXY:                  
      NO_PROXY:                     
    Mounts:
      /tmp/build from buildworkdir (rw)
      /var/cache/blobs from build-blob-cache (rw)
      /var/lib/containers/cache from buildcachedir (rw)
      /var/lib/containers/storage from container-storage-root (rw)
      /var/lib/kubelet/config.json from node-pullsecrets (rw)
      /var/run/configs/openshift.io/build-system from build-system-configs (ro)
      /var/run/configs/openshift.io/certs from build-ca-bundles (rw)
      /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h2xw9 (ro)
      /var/run/secrets/openshift.io/pull from builder-dockercfg-kkd9h-pull (ro)
      /var/run/secrets/openshift.io/push from builder-dockercfg-kkd9h-push (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  node-pullsecrets:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/config.json
    HostPathType:  File
  buildcachedir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/containers/cache
    HostPathType:  
  buildworkdir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  builder-dockercfg-kkd9h-push:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-kkd9h
    Optional:    false
  builder-dockercfg-kkd9h-pull:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-kkd9h
    Optional:    false
  build-system-configs:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hello-nodejs-1-sys-config
    Optional:  false
  build-ca-bundles:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hello-nodejs-1-ca
    Optional:  false
  build-proxy-ca-bundles:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hello-nodejs-1-global-ca
    Optional:  false
  container-storage-root:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  build-blob-cache:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-h2xw9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age    From               Message
  ----    ------          ----   ----               -------
  Normal  Scheduled       6m53s  default-scheduler  Successfully assigned e2e-test-build-service-gtknd/hello-nodejs-1-build to ostest-n5rnf-worker-0-94fxs
  Normal  AddedInterface  6m11s  multus             Add eth0 [10.128.156.43/23] from kuryr
  Normal  Pulling         6m11s  kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917"
  Normal  Pulled          6m1s   kubelet            Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" in 10.092883298s
  Normal  Created         6m     kubelet            Created container git-clone
  Normal  Started         6m     kubelet            Started container git-clone
  Normal  Pulled          5m57s  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
  Normal  Created         5m56s  kubelet            Created container manage-dockerfile
  Normal  Started         5m56s  kubelet            Started container manage-dockerfile
  Normal  Pulled          5m56s  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
  Normal  Created         5m55s  kubelet            Created container sti-build
  Normal  Started         5m55s  kubelet            Started container sti-build


Oct 13 10:20:35.303: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config logs pod/hello-nodejs-1-build -c git-clone -n e2e-test-build-service-gtknd'
Oct 13 10:20:35.529: INFO: Log for pod "hello-nodejs-1-build"/"git-clone"
---->
Cloning "https://github.com/sclorg/nodejs-ex.git" ...
	Commit:	5b6d6ae91071551476de2322fea621fc51c1d73a (Merge pull request #264 from multi-arch/samples)
	Author:	Petr Hracek <phracek@redhat.com>
	Date:	Wed May 11 08:49:58 2022 +0200
<----end of log for "hello-nodejs-1-build"/"git-clone"

Oct 13 10:20:35.529: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config logs pod/hello-nodejs-1-build -c manage-dockerfile -n e2e-test-build-service-gtknd'
Oct 13 10:20:35.733: INFO: Log for pod "hello-nodejs-1-build"/"manage-dockerfile"
---->

<----end of log for "hello-nodejs-1-build"/"manage-dockerfile"

Oct 13 10:20:35.733: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config logs pod/hello-nodejs-1-build -c sti-build -n e2e-test-build-service-gtknd'
Oct 13 10:20:35.925: INFO: Log for pod "hello-nodejs-1-build"/"sti-build"
---->
time="2022-10-13T10:14:41Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
I1013 10:14:41.815598       1 defaults.go:102] Defaulting to storage driver "overlay" with options [mountopt=metacopy=on].
Caching blobs under "/var/cache/blobs".
Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed...
Getting image source signatures
Copying blob sha256:809fe483e88523e7021d76b001a552856f216430023bdc0aeff8fce8df385535
Copying blob sha256:1b3417e31a5e0e64f861e121d4efed3152e75aaa85026cd784cd0070e063daa3
Copying blob sha256:36bead343ed7bbdf6c0b72c3914b13a81201e129d6e8365d42c23a1d85bbe03c
Copying blob sha256:ce3a003fc2c2e9d3655d24888bcf5c86618c9803315603e06076eec0e2d9f3fe
Copying config sha256:33ddc208152851493b4416ed17d4b97a1ae087db28f648a4802508a52e3b0762
Writing manifest to image destination
Storing signatures
Generating dockerfile with builder image image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed
Adding transient rw bind mount for /run/secrets/rhsm
STEP 1/9: FROM image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed
STEP 2/9: LABEL "io.openshift.build.source-location"="https://github.com/sclorg/nodejs-ex.git"       "io.openshift.build.image"="image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"       "io.openshift.build.commit.author"="Petr Hracek <phracek@redhat.com>"       "io.openshift.build.commit.date"="Wed May 11 08:49:58 2022 +0200"       "io.openshift.build.commit.id"="5b6d6ae91071551476de2322fea621fc51c1d73a"       "io.openshift.build.commit.ref"="master"       "io.openshift.build.commit.message"="Merge pull request #264 from multi-arch/samples"
STEP 3/9: ENV OPENSHIFT_BUILD_NAME="hello-nodejs-1"     OPENSHIFT_BUILD_NAMESPACE="e2e-test-build-service-gtknd"     OPENSHIFT_BUILD_SOURCE="https://github.com/sclorg/nodejs-ex.git"     OPENSHIFT_BUILD_COMMIT="5b6d6ae91071551476de2322fea621fc51c1d73a"
STEP 4/9: USER root
STEP 5/9: COPY upload/src /tmp/src
STEP 6/9: RUN chown -R 1001:0 /tmp/src
STEP 7/9: USER 1001
STEP 8/9: RUN /usr/libexec/s2i/assemble
---> Installing application source ...
---> Installing all dependencies
npm WARN deprecated superagent@1.2.0: Please upgrade to v7.0.2+ of superagent.  We have fixed numerous issues with streams, form-data, attach(), filesystem errors not bubbling up (ENOENT on attach()), and all tests are now passing.  See the releases tab for more information at <https://github.com/visionmedia/superagent/releases>.
npm WARN deprecated mkdirp@0.5.1: Legacy versions of mkdirp are no longer supported. Please update to mkdirp 1.x. (Note that the API surface has changed to use Promises in 1.x.)
npm WARN deprecated jade@0.26.3: Jade has been renamed to pug, please install the latest version of pug instead of jade
npm WARN deprecated bson@1.0.9: Fixed a critical issue with BSON serialization documented in CVE-2019-2391, see https://bit.ly/2KcpXdo for more details
npm WARN deprecated formidable@1.0.14: Please upgrade to latest, formidable@v2 or formidable@v3! Check these notes: https://bit.ly/2ZEqIau
npm WARN deprecated to-iso-string@0.0.2: to-iso-string has been deprecated, use @segment/to-iso-string instead.
npm WARN deprecated mkdirp@0.3.0: Legacy versions of mkdirp are no longer supported. Please update to mkdirp 1.x. (Note that the API surface has changed to use Promises in 1.x.)
npm WARN deprecated minimatch@0.3.0: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue

> ejs@2.7.4 postinstall /opt/app-root/src/node_modules/ejs
> node ./postinstall.js

Thank you for installing EJS: built with the Jake JavaScript build tool (https://jakejs.com/)

npm notice created a lockfile as package-lock.json. You should commit this file.
added 129 packages from 345 contributors and audited 129 packages in 7.889s

7 packages are looking for funding
  run `npm fund` for details

found 15 vulnerabilities (2 low, 4 moderate, 6 high, 3 critical)
  run `npm audit fix` to fix them, or `npm audit` for details
---> Building in production mode
---> Pruning the development dependencies
audited 129 packages in 1.27s

7 packages are looking for funding
  run `npm fund` for details

found 15 vulnerabilities (2 low, 4 moderate, 6 high, 3 critical)
  run `npm audit fix` to fix them, or `npm audit` for details
/tmp is not a mountpoint
---> Cleaning the /tmp/npm-*
/opt/app-root/src/.npm is not a mountpoint
---> Cleaning the npm cache /opt/app-root/src/.npm
STEP 9/9: CMD /usr/libexec/s2i/run
COMMIT temp.builder.openshift.io/e2e-test-build-service-gtknd/hello-nodejs-1:f475f1a4
time="2022-10-13T10:15:48Z" level=warning msg="Adding metacopy option, configured globally"
Getting image source signatures
Copying blob sha256:b38cb92596778e2c18c2bde15f229772fe794af39345dd456c3bf6702cc11eef
Copying blob sha256:23e15b9ab3f0ef87e5fd30f1ce0fb91d39ceea2d903dce104620a24a5a551b77
Copying blob sha256:5863c9bfd6aff8171d8b37cb09cbcf77ec0228f0b0acd7eaf69f561882217284
Copying blob sha256:86558118bc6a1ad01772df1eeab2c2ce5445b26e584f90a705adc5f22a53a380
Copying blob sha256:376f659b89689e9fd6c155d36a53ef300b546b096d7814345ff01310f1c84c0a
Copying config sha256:a56ed07be0e5a19415e05899633467d9cfd889ab804ab80a4721925ff0cea073
Writing manifest to image destination
Storing signatures
--> a56ed07be0e
Successfully tagged temp.builder.openshift.io/e2e-test-build-service-gtknd/hello-nodejs-1:f475f1a4
a56ed07be0e5a19415e05899633467d9cfd889ab804ab80a4721925ff0cea073

Pushing image image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs:latest ...
Getting image source signatures
Copying blob sha256:376f659b89689e9fd6c155d36a53ef300b546b096d7814345ff01310f1c84c0a
Copying blob sha256:36bead343ed7bbdf6c0b72c3914b13a81201e129d6e8365d42c23a1d85bbe03c
Copying blob sha256:809fe483e88523e7021d76b001a552856f216430023bdc0aeff8fce8df385535
Copying blob sha256:ce3a003fc2c2e9d3655d24888bcf5c86618c9803315603e06076eec0e2d9f3fe
Copying blob sha256:1b3417e31a5e0e64f861e121d4efed3152e75aaa85026cd784cd0070e063daa3
Copying config sha256:a56ed07be0e5a19415e05899633467d9cfd889ab804ab80a4721925ff0cea073
Writing manifest to image destination
Storing signatures
Successfully pushed image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e
Push successful
<----end of log for "hello-nodejs-1-build"/"sti-build"

Oct 13 10:20:35.925: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config describe pod/hello-nodejs-78679dbb86-7j7fd -n e2e-test-build-service-gtknd'
Oct 13 10:20:36.126: INFO: Describing pod "hello-nodejs-78679dbb86-7j7fd"
Name:         hello-nodejs-78679dbb86-7j7fd
Namespace:    e2e-test-build-service-gtknd
Priority:     0
Node:         ostest-n5rnf-worker-0-8kq82/10.196.2.72
Start Time:   Thu, 13 Oct 2022 10:15:53 +0000
Labels:       deployment=hello-nodejs
              pod-template-hash=78679dbb86
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.157.248"
                    ],
                    "mac": "fa:16:3e:c0:02:96",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.157.248"
                    ],
                    "mac": "fa:16:3e:c0:02:96",
                    "default": true,
                    "dns": {}
                }]
              openshift.io/generated-by: OpenShiftNewApp
              openshift.io/scc: restricted
Status:       Running
IP:           10.128.157.248
IPs:
  IP:           10.128.157.248
Controlled By:  ReplicaSet/hello-nodejs-78679dbb86
Containers:
  hello-nodejs:
    Container ID:   cri-o://e305318ec6f36ebe471b3be8974f9c98d63e3a16bc5eb08f4c7bee061c7e8e52
    Image:          image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e
    Image ID:       image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 13 Oct 2022 10:17:06 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nrsp2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-nrsp2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age    From               Message
  ----    ------          ----   ----               -------
  Normal  Scheduled       4m42s  default-scheduler  Successfully assigned e2e-test-build-service-gtknd/hello-nodejs-78679dbb86-7j7fd to ostest-n5rnf-worker-0-8kq82
  Normal  AddedInterface  3m48s  multus             Add eth0 [10.128.157.248/23] from kuryr
  Normal  Pulling         3m48s  kubelet            Pulling image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e"
  Normal  Pulled          3m31s  kubelet            Successfully pulled image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e" in 17.546769207s
  Normal  Created         3m30s  kubelet            Created container hello-nodejs
  Normal  Started         3m30s  kubelet            Started container hello-nodejs


Oct 13 10:20:36.126: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config logs pod/hello-nodejs-78679dbb86-7j7fd -c hello-nodejs -n e2e-test-build-service-gtknd'
Oct 13 10:20:36.292: INFO: Log for pod "hello-nodejs-78679dbb86-7j7fd"/"hello-nodejs"
---->
Environment: 
	DEV_MODE=false
	NODE_ENV=production
	DEBUG_PORT=5858
Launching via npm...
npm info it worked if it ends with ok
npm info using npm@6.14.17
npm info using node@v14.20.0
npm info lifecycle nodejs-ex@0.0.1~prestart: nodejs-ex@0.0.1
npm info lifecycle nodejs-ex@0.0.1~start: nodejs-ex@0.0.1

> nodejs-ex@0.0.1 start /opt/app-root/src
> node server.js

Server running on http://0.0.0.0:8080
<----end of log for "hello-nodejs-78679dbb86-7j7fd"/"hello-nodejs"

Oct 13 10:20:36.293: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config describe pod/test-1-build -n e2e-test-build-service-gtknd'
Oct 13 10:20:36.458: INFO: Describing pod "test-1-build"
Name:         test-1-build
Namespace:    e2e-test-build-service-gtknd
Priority:     0
Node:         ostest-n5rnf-worker-0-8kq82/10.196.2.72
Start Time:   Thu, 13 Oct 2022 10:17:08 +0000
Labels:       openshift.io/build.name=test-1
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.157.22"
                    ],
                    "mac": "fa:16:3e:23:da:ef",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "kuryr",
                    "interface": "eth0",
                    "ips": [
                        "10.128.157.22"
                    ],
                    "mac": "fa:16:3e:23:da:ef",
                    "default": true,
                    "dns": {}
                }]
              openshift.io/build.name: test-1
              openshift.io/scc: privileged
Status:       Failed
IP:           10.128.157.22
IPs:
  IP:           10.128.157.22
Controlled By:  Build/test-1
Init Containers:
  manage-dockerfile:
    Container ID:  cri-o://8c61b3e367ee156d61d63eb22a9aea91f598fb8c5ce4403a4d222065f90f994c
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Port:          <none>
    Host Port:     <none>
    Args:
      openshift-manage-dockerfile
      --loglevel=0
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 13 Oct 2022 10:17:45 +0000
      Finished:     Thu, 13 Oct 2022 10:17:45 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      BUILD:                        {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-1","namespace":"e2e-test-build-service-gtknd","uid":"31cfb654-58df-419c-b6f9-d6d51803798a","resourceVersion":"941148","generation":1,"creationTimestamp":"2022-10-13T10:17:08Z","labels":{"build":"test","buildconfig":"test","openshift.io/build-config.name":"test","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test","uid":"ce193f4e-89d4-4552-9e49-5e309e006da9","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:17:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce193f4e-89d4-4552-9e49-5e309e006da9\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"\nFROM image-registry.openshift-image-registry.svc:5000/openshift/tools:latest\nRUN cat /etc/resolv.conf\nRUN curl -vvv hello-nodejs:8080\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/test:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Build configuration change"}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"test"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:17:08Z","lastTransitionTime":"2022-10-13T10:17:08Z"}]}}
                                    
      LANG:                         C.utf8
      BUILD_REGISTRIES_CONF_PATH:   /var/run/configs/openshift.io/build-system/registries.conf
      BUILD_REGISTRIES_DIR_PATH:    /var/run/configs/openshift.io/build-system/registries.d
      BUILD_SIGNATURE_POLICY_PATH:  /var/run/configs/openshift.io/build-system/policy.json
      BUILD_STORAGE_CONF_PATH:      /var/run/configs/openshift.io/build-system/storage.conf
      BUILD_BLOBCACHE_DIR:          /var/cache/blobs
      HTTP_PROXY:                   
      HTTPS_PROXY:                  
      NO_PROXY:                     
    Mounts:
      /tmp/build from buildworkdir (rw)
      /var/cache/blobs from build-blob-cache (rw)
      /var/run/configs/openshift.io/build-system from build-system-configs (ro)
      /var/run/configs/openshift.io/certs from build-ca-bundles (rw)
      /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhgbb (ro)
Containers:
  docker-build:
    Container ID:  cri-o://1330a6c507d6bda1d8517057d3a0a7d60ca0e5cb0d0d4f7b6116125030cf6efc
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917
    Port:          <none>
    Host Port:     <none>
    Args:
      openshift-docker-build
      --loglevel=0
    State:      Terminated
      Reason:   Error
      Message:   [curl progress meter elided: 0 bytes received while the connection attempt waited from 0:01:47 through 0:02:08]
* connect to 172.30.35.175 port 8080 failed: Connection timed out
* Failed to connect to hello-nodejs port 8080: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to hello-nodejs port 8080: Connection timed out
error: build error: error building at STEP "RUN curl -vvv hello-nodejs:8080": error while running runtime: exit status 7

      Exit Code:    1
      Started:      Thu, 13 Oct 2022 10:17:46 +0000
      Finished:     Thu, 13 Oct 2022 10:20:31 +0000
    Ready:          False
    Restart Count:  0
    Environment:
      BUILD:                        {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"test-1","namespace":"e2e-test-build-service-gtknd","uid":"31cfb654-58df-419c-b6f9-d6d51803798a","resourceVersion":"941148","generation":1,"creationTimestamp":"2022-10-13T10:17:08Z","labels":{"build":"test","buildconfig":"test","openshift.io/build-config.name":"test","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"test","uid":"ce193f4e-89d4-4552-9e49-5e309e006da9","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:17:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce193f4e-89d4-4552-9e49-5e309e006da9\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"\nFROM image-registry.openshift-image-registry.svc:5000/openshift/tools:latest\nRUN cat /etc/resolv.conf\nRUN curl -vvv hello-nodejs:8080\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6"},"pullSecret":{"name":"builder-dockercfg-kkd9h"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/test:latest"},"pushSecret":{"name":"builder-dockercfg-kkd9h"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Build configuration change"}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/test:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-build-service-gtknd","name":"test"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:17:08Z","lastTransitionTime":"2022-10-13T10:17:08Z"}]}}
                                    
      LANG:                         C.utf8
      PUSH_DOCKERCFG_PATH:          /var/run/secrets/openshift.io/push
      PULL_DOCKERCFG_PATH:          /var/run/secrets/openshift.io/pull
      BUILD_REGISTRIES_CONF_PATH:   /var/run/configs/openshift.io/build-system/registries.conf
      BUILD_REGISTRIES_DIR_PATH:    /var/run/configs/openshift.io/build-system/registries.d
      BUILD_SIGNATURE_POLICY_PATH:  /var/run/configs/openshift.io/build-system/policy.json
      BUILD_STORAGE_CONF_PATH:      /var/run/configs/openshift.io/build-system/storage.conf
      BUILD_STORAGE_DRIVER:         overlay
      BUILD_BLOBCACHE_DIR:          /var/cache/blobs
      HTTP_PROXY:                   
      HTTPS_PROXY:                  
      NO_PROXY:                     
    Mounts:
      /tmp/build from buildworkdir (rw)
      /var/cache/blobs from build-blob-cache (rw)
      /var/lib/containers/cache from buildcachedir (rw)
      /var/lib/containers/storage from container-storage-root (rw)
      /var/lib/kubelet/config.json from node-pullsecrets (rw)
      /var/run/configs/openshift.io/build-system from build-system-configs (ro)
      /var/run/configs/openshift.io/certs from build-ca-bundles (rw)
      /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhgbb (ro)
      /var/run/secrets/openshift.io/pull from builder-dockercfg-kkd9h-pull (ro)
      /var/run/secrets/openshift.io/push from builder-dockercfg-kkd9h-push (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  buildcachedir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/containers/cache
    HostPathType:  
  buildworkdir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  node-pullsecrets:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/config.json
    HostPathType:  File
  builder-dockercfg-kkd9h-push:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-kkd9h
    Optional:    false
  builder-dockercfg-kkd9h-pull:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  builder-dockercfg-kkd9h
    Optional:    false
  build-system-configs:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-1-sys-config
    Optional:  false
  build-ca-bundles:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-1-ca
    Optional:  false
  build-proxy-ca-bundles:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-1-global-ca
    Optional:  false
  container-storage-root:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  build-blob-cache:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-vhgbb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age    From               Message
  ----    ------          ----   ----               -------
  Normal  Scheduled       3m27s  default-scheduler  Successfully assigned e2e-test-build-service-gtknd/test-1-build to ostest-n5rnf-worker-0-8kq82
  Normal  AddedInterface  3m23s  multus             Add eth0 [10.128.157.22/23] from kuryr
  Normal  Pulling         3m23s  kubelet            Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917"
  Normal  Pulled          2m51s  kubelet            Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" in 31.971347925s
  Normal  Created         2m51s  kubelet            Created container manage-dockerfile
  Normal  Started         2m51s  kubelet            Started container manage-dockerfile
  Normal  Pulled          2m50s  kubelet            Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
  Normal  Created         2m50s  kubelet            Created container docker-build
  Normal  Started         2m50s  kubelet            Started container docker-build


Oct 13 10:20:36.458: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config logs pod/test-1-build -c manage-dockerfile -n e2e-test-build-service-gtknd'
Oct 13 10:20:36.636: INFO: Log for pod "test-1-build"/"manage-dockerfile"
---->
Replaced Dockerfile FROM image image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
<----end of log for "test-1-build"/"manage-dockerfile"

Oct 13 10:20:36.636: INFO: Running 'oc --namespace=e2e-test-build-service-gtknd --kubeconfig=.kube/config logs pod/test-1-build -c docker-build -n e2e-test-build-service-gtknd'
Oct 13 10:20:36.809: INFO: Log for pod "test-1-build"/"docker-build"
---->
time="2022-10-13T10:17:48Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
I1013 10:17:48.628428       1 defaults.go:102] Defaulting to storage driver "overlay" with options [mountopt=metacopy=on].
Caching blobs under "/var/cache/blobs".

Pulling image image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6 ...
Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6...
Getting image source signatures
Copying blob sha256:a2f3f5a14ad25b6ea4a3484161d2fb21e924b5fa662c4fc429d711326af500e2
Copying blob sha256:46ccf5d9b3e4a94e85bfed87163ba4707c06afe97a712db5e466d38d160ecfc1
Copying blob sha256:d033ae3b9132332cad930a5e3a796b1b70903b6f86a069aea1dcdc3cf4c2909e
Copying blob sha256:a80a503a1f95aeefc804ebe15440205f00c2682b566b3f41ff21f7922607f4f7
Copying blob sha256:237bfbffb5f297018ef21e92b8fede75d3ca63e2154236331ef2b2a9dd818a02
Copying blob sha256:39382676eb30fabb7a0616b064e142f6ef58d45216a9124e9358d14b12dedd65
Copying config sha256:de1ef0c021bf845d199099d776f711f71801769970d2548f72e44e75e86be7c1
Writing manifest to image destination
Storing signatures
Adding transient rw bind mount for /run/secrets/rhsm
STEP 1/5: FROM image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6
STEP 2/5: RUN cat /etc/resolv.conf
search e2e-test-build-service-gtknd.svc.cluster.local svc.cluster.local cluster.local ostest.shiftstack.com shiftstack.com
nameserver 172.30.0.10
options ndots:5
time="2022-10-13T10:18:17Z" level=warning msg="Adding metacopy option, configured globally"
--> 099c166becd
STEP 3/5: RUN curl -vvv hello-nodejs:8080
* Rebuilt URL to: hello-nodejs:8080/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 172.30.35.175...
* TCP_NODELAY set

  [curl progress meter elided: 0 bytes received while the connection attempt waited from 0:00:00 through 0:02:08]
* connect to 172.30.35.175 port 8080 failed: Connection timed out
* Failed to connect to hello-nodejs port 8080: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to hello-nodejs port 8080: Connection timed out
error: build error: error building at STEP "RUN curl -vvv hello-nodejs:8080": error while running runtime: exit status 7
<----end of log for "test-1-build"/"docker-build"

[AfterEach] [sig-builds][Feature:Builds] build can reference a cluster service
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-build-service-gtknd".
STEP: Found 47 events.
Oct 13 10:20:36.816: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hello-nodejs-1-build: { } Scheduled: Successfully assigned e2e-test-build-service-gtknd/hello-nodejs-1-build to ostest-n5rnf-worker-0-94fxs
Oct 13 10:20:36.816: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hello-nodejs-78679dbb86-7j7fd: { } Scheduled: Successfully assigned e2e-test-build-service-gtknd/hello-nodejs-78679dbb86-7j7fd to ostest-n5rnf-worker-0-8kq82
Oct 13 10:20:36.816: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-1-build: { } Scheduled: Successfully assigned e2e-test-build-service-gtknd/test-1-build to ostest-n5rnf-worker-0-8kq82
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:41 +0000 UTC - event for hello-nodejs: {deployment-controller } ScalingReplicaSet: Scaled up replica set hello-nodejs-75466689c to 1
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:41 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: Error creating: Pod "hello-nodejs-75466689c-qjq8r" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:41 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: Error creating: Pod "hello-nodejs-75466689c-cgc6p" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:41 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: Error creating: Pod "hello-nodejs-75466689c-b92f6" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:41 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: Error creating: Pod "hello-nodejs-75466689c-6gskv" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:41 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: Error creating: Pod "hello-nodejs-75466689c-zhdsl" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:41 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: Error creating: Pod "hello-nodejs-75466689c-rj8qm" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:42 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: Error creating: Pod "hello-nodejs-75466689c-d87c6" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:42 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: Error creating: Pod "hello-nodejs-75466689c-967q8" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:43 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: Error creating: Pod "hello-nodejs-75466689c-tdvvm" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:13:44 +0000 UTC - event for hello-nodejs-75466689c: {replicaset-controller } FailedCreate: (combined from similar events): Error creating: Pod "hello-nodejs-75466689c-sqwn8" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:24 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Pulling: Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917"
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:24 +0000 UTC - event for hello-nodejs-1-build: {multus } AddedInterface: Add eth0 [10.128.156.43/23] from kuryr
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:34 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" in 10.092883298s
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:35 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container git-clone
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:35 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container git-clone
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:36 +0000 UTC - event for hello-nodejs-1: {build-controller } BuildStarted: Build e2e-test-build-service-gtknd/hello-nodejs-1 is now running
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:38 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:39 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:39 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container manage-dockerfile
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:39 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container manage-dockerfile
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:40 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container sti-build
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:14:40 +0000 UTC - event for hello-nodejs-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container sti-build
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:15:53 +0000 UTC - event for hello-nodejs: {deployment-controller } ScalingReplicaSet: Scaled up replica set hello-nodejs-78679dbb86 to 1
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:15:53 +0000 UTC - event for hello-nodejs-78679dbb86: {replicaset-controller } SuccessfulCreate: Created pod: hello-nodejs-78679dbb86-7j7fd
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:15:56 +0000 UTC - event for hello-nodejs-1: {build-controller } BuildCompleted: Build e2e-test-build-service-gtknd/hello-nodejs-1 completed successfully
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:16:48 +0000 UTC - event for hello-nodejs-78679dbb86-7j7fd: {kubelet ostest-n5rnf-worker-0-8kq82} Pulling: Pulling image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e"
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:16:48 +0000 UTC - event for hello-nodejs-78679dbb86-7j7fd: {multus } AddedInterface: Add eth0 [10.128.157.248/23] from kuryr
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:05 +0000 UTC - event for hello-nodejs-78679dbb86-7j7fd: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Successfully pulled image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-service-gtknd/hello-nodejs@sha256:b16e773674020544d76958a2b2a53bc9c98f5c3cf9f6b46020cd18f17afe133e" in 17.546769207s
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:06 +0000 UTC - event for hello-nodejs: {deployment-controller } ScalingReplicaSet: Scaled down replica set hello-nodejs-75466689c to 0
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:06 +0000 UTC - event for hello-nodejs-78679dbb86-7j7fd: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container hello-nodejs
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:06 +0000 UTC - event for hello-nodejs-78679dbb86-7j7fd: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container hello-nodejs
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:08 +0000 UTC - event for test: {buildconfig-controller } BuildConfigInstantiateFailed: error instantiating Build from BuildConfig e2e-test-build-service-gtknd/test (0): Error resolving ImageStreamTag tools:latest in namespace e2e-test-build-service-gtknd: unable to find latest tagged image
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:08 +0000 UTC - event for test: {buildconfig-controller } BuildConfigTriggerFailed: error triggering Build for BuildConfig e2e-test-build-service-gtknd/test: Internal error occurred: build config e2e-test-build-service-gtknd/test has already instantiated a build for imageid image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:13 +0000 UTC - event for test-1-build: {multus } AddedInterface: Add eth0 [10.128.157.22/23] from kuryr
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:13 +0000 UTC - event for test-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Pulling: Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917"
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:45 +0000 UTC - event for test-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container manage-dockerfile
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:45 +0000 UTC - event for test-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container manage-dockerfile
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:45 +0000 UTC - event for test-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" in 31.971347925s
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:46 +0000 UTC - event for test-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container docker-build
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:46 +0000 UTC - event for test-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container docker-build
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:46 +0000 UTC - event for test-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:17:47 +0000 UTC - event for test-1: {build-controller } BuildStarted: Build e2e-test-build-service-gtknd/test-1 is now running
Oct 13 10:20:36.816: INFO: At 2022-10-13 10:20:31 +0000 UTC - event for test-1: {build-controller } BuildFailed: Build e2e-test-build-service-gtknd/test-1 failed
Oct 13 10:20:36.824: INFO: POD                            NODE                         PHASE      GRACE  CONDITIONS
Oct 13 10:20:36.824: INFO: hello-nodejs-1-build           ostest-n5rnf-worker-0-94fxs  Succeeded         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:14:39 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:15:53 +0000 UTC PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:15:53 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:13:42 +0000 UTC  }]
Oct 13 10:20:36.824: INFO: hello-nodejs-78679dbb86-7j7fd  ostest-n5rnf-worker-0-8kq82  Running           [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:15:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:17:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:17:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:15:53 +0000 UTC  }]
Oct 13 10:20:36.824: INFO: test-1-build                   ostest-n5rnf-worker-0-8kq82  Failed            [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:17:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:20:32 +0000 UTC ContainersNotReady containers with unready status: [docker-build]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:20:32 +0000 UTC ContainersNotReady containers with unready status: [docker-build]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:17:08 +0000 UTC  }]
Oct 13 10:20:36.824: INFO: 
Oct 13 10:20:36.831: INFO: skipping dumping cluster info - cluster too large
Oct 13 10:20:36.852: INFO: Deleted {user.openshift.io/v1, Resource=users  e2e-test-build-service-gtknd-user}, err: <nil>
Oct 13 10:20:36.866: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-build-service-gtknd}, err: <nil>
Oct 13 10:20:36.877: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~YdlreoZuVz3sVunvgCO0hjE1BiO0Nu3DAjsHWips5Mg}, err: <nil>
[AfterEach] [sig-builds][Feature:Builds] build can reference a cluster service
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-build-service-gtknd" for this suite.
fail [github.com/openshift/origin/test/extended/builds/service.go:80]: Unexpected error:
    <*errors.errorString | 0xc002105250>: {
        s: "The build \"test-1\" status is \"Failed\"",
    }
    The build "test-1" status is "Failed"
occurred
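
For reference, the inline Dockerfile executed by the failing build "test-1" (reproduced verbatim from the BUILD environment variable shown in the test-1-build pod description above) was:

    FROM image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
    RUN cat /etc/resolv.conf
    RUN curl -vvv hello-nodejs:8080

The build failed at the third step: curl could not reach the hello-nodejs service (172.30.35.175:8080) from the build pod and timed out after roughly 128 seconds.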

Stderr
_sig-builds__Feature_Builds__valueFrom__process_valueFrom_in_build_strategy_environment_variables__should_successfully_resolve_valueFrom_in_s2i_build_environment_variables__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 113.0s

_sig-instrumentation__Late__Alerts_shouldn't_report_any_alerts_in_firing_or_pending_state_apart_from_Watchdog_and_AlertmanagerReceiversNotConfigured_and_have_no_gaps_in_Watchdog_firing__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 29.8s

_sig-api-machinery__Feature_APIServer__Late__kube-apiserver_terminates_within_graceful_termination_period__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.5s

_sig-etcd__etcd_leader_changes_are_not_excessive__Late___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.8s

Failed:
fail [github.com/openshift/origin/test/extended/etcd/leader_changes.go:36]: Unexpected error:
    <*errors.errorString | 0xc0019d2b60>: {
        s: "expecting Prometheus query to return at least one item, got 0 instead",
    }
    expecting Prometheus query to return at least one item, got 0 instead
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-etcd] etcd
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[It] leader changes are not excessive [Late] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/etcd/leader_changes.go:19
STEP: Examining the number of etcd leadership changes over the run
[AfterEach] [sig-etcd] etcd
  github.com/openshift/origin/test/extended/util/client.go:140
[AfterEach] [sig-etcd] etcd
  github.com/openshift/origin/test/extended/util/client.go:141
fail [github.com/openshift/origin/test/extended/etcd/leader_changes.go:36]: Unexpected error:
    <*errors.errorString | 0xc0019d2b60>: {
        s: "expecting Prometheus query to return at least one item, got 0 instead",
    }
    expecting Prometheus query to return at least one item, got 0 instead
occurred

Stderr
_sig-node__Managed_cluster_should_report_ready_nodes_the_entire_duration_of_the_test_run__Late___Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 46.8s

_sig-node__Late__should_not_have_pod_creation_failures_due_to_systemd_timeouts__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.8s

_sig-api-machinery__Feature_APIServer__Late__kubelet_terminates_kube-apiserver_gracefully__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.8s

_sig-storage__Late__Metrics_should_report_short_attach_times__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 29.5s

_sig-storage__Late__Metrics_should_report_short_mount_times__Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 32.4s

_sig-instrumentation__Prometheus_when_installed_on_the_cluster_should_report_telemetry_if_a_cloud.openshift.com_token_is_present__Late___Skipped_Disconnected___Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 84.0s

Failed:
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:282]: Unexpected error:
    <errors.aggregate | len:2, cap:2>: [
        {
            s: "promQL query returned unexpected results:\nfederate_samples{job=\"telemeter-client\"} >= 10\n[]",
        },
        {
            s: "promQL query returned unexpected results:\nmetricsclient_request_send{client=\"federate_to\",job=\"telemeter-client\",status_code=\"200\"} >= 1\n[]",
        },
    ]
    [promQL query returned unexpected results:
    federate_samples{job="telemeter-client"} >= 10
    [], promQL query returned unexpected results:
    metricsclient_request_send{client="federate_to",job="telemeter-client",status_code="200"} >= 1
    []]
occurred

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[BeforeEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:250
[It] should report telemetry if a cloud.openshift.com token is present [Late] [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/prometheus/prometheus.go:259
Oct 13 10:40:37.950: INFO: Creating namespace "e2e-test-prometheus-pvkxc"
Oct 13 10:40:38.261: INFO: Waiting for ServiceAccount "default" to be provisioned...
Oct 13 10:40:38.383: INFO: Creating new exec pod
STEP: perform prometheus metric query federate_samples{job="telemeter-client"} >= 10
Oct 13 10:41:04.504: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10"'
Oct 13 10:41:04.855: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10'\n"
Oct 13 10:41:04.855: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query metricsclient_request_send{client="federate_to",job="telemeter-client",status_code="200"} >= 1
Oct 13 10:41:04.856: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1"'
Oct 13 10:41:05.186: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1'\n"
Oct 13 10:41:05.186: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query metricsclient_request_send{client="federate_to",job="telemeter-client",status_code="200"} >= 1
Oct 13 10:41:15.188: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1"'
Oct 13 10:41:15.490: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1'\n"
Oct 13 10:41:15.490: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query federate_samples{job="telemeter-client"} >= 10
Oct 13 10:41:15.490: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10"'
Oct 13 10:41:15.769: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10'\n"
Oct 13 10:41:15.770: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query metricsclient_request_send{client="federate_to",job="telemeter-client",status_code="200"} >= 1
Oct 13 10:41:25.771: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1"'
Oct 13 10:41:26.144: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1'\n"
Oct 13 10:41:26.144: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query federate_samples{job="telemeter-client"} >= 10
Oct 13 10:41:26.144: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10"'
Oct 13 10:41:26.478: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10'\n"
Oct 13 10:41:26.478: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query metricsclient_request_send{client="federate_to",job="telemeter-client",status_code="200"} >= 1
Oct 13 10:41:36.481: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1"'
Oct 13 10:41:36.788: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1'\n"
Oct 13 10:41:36.788: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query federate_samples{job="telemeter-client"} >= 10
Oct 13 10:41:36.789: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10"'
Oct 13 10:41:37.056: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10'\n"
Oct 13 10:41:37.056: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query metricsclient_request_send{client="federate_to",job="telemeter-client",status_code="200"} >= 1
Oct 13 10:41:47.058: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1"'
Oct 13 10:41:47.381: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=metricsclient_request_send%7Bclient%3D%22federate_to%22%2Cjob%3D%22telemeter-client%22%2Cstatus_code%3D%22200%22%7D+%3E%3D+1'\n"
Oct 13 10:41:47.381: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query federate_samples{job="telemeter-client"} >= 10
Oct 13 10:41:47.381: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-pvkxc exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10"'
Oct 13 10:41:47.697: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=federate_samples%7Bjob%3D%22telemeter-client%22%7D+%3E%3D+10'\n"
Oct 13 10:41:47.698: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-prometheus-pvkxc".
STEP: Found 5 events.
Oct 13 10:41:57.750: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-pvkxc/execpod to ostest-n5rnf-worker-0-j4pkp
Oct 13 10:41:57.750: INFO: At 2022-10-13 10:41:03 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.209.169/23] from kuryr
Oct 13 10:41:57.750: INFO: At 2022-10-13 10:41:03 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine
Oct 13 10:41:57.750: INFO: At 2022-10-13 10:41:03 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 10:41:57.750: INFO: At 2022-10-13 10:41:03 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 10:41:57.754: INFO: POD      NODE                         PHASE    GRACE  CONDITIONS
Oct 13 10:41:57.755: INFO: execpod  ostest-n5rnf-worker-0-j4pkp  Running  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:40:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:41:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:41:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:40:38 +0000 UTC  }]
Oct 13 10:41:57.755: INFO: 
Oct 13 10:41:57.764: INFO: skipping dumping cluster info - cluster too large
[AfterEach] [sig-instrumentation] Prometheus
  github.com/openshift/origin/test/extended/util/client.go:141
STEP: Destroying namespace "e2e-test-prometheus-pvkxc" for this suite.
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:282]: Unexpected error:
    <errors.aggregate | len:2, cap:2>: [
        {
            s: "promQL query returned unexpected results:\nfederate_samples{job=\"telemeter-client\"} >= 10\n[]",
        },
        {
            s: "promQL query returned unexpected results:\nmetricsclient_request_send{client=\"federate_to\",job=\"telemeter-client\",status_code=\"200\"} >= 1\n[]",
        },
    ]
    [promQL query returned unexpected results:
    federate_samples{job="telemeter-client"} >= 10
    [], promQL query returned unexpected results:
    metricsclient_request_send{client="federate_to",job="telemeter-client",status_code="200"} >= 1
    []]
occurred
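Note: both telemetry checks above returned an empty vector ("result":[]) for the entire polling window, which is why the aggregate error lists the two failing PromQL expressions. A minimal manual re-check along the lines of what the test runs might look like the sketch below; it assumes a pod with curl available (the e2e namespace is destroyed after the suite, so in practice any pod with cluster DNS access would do) and a valid monitoring bearer token exported as $TOKEN (token retrieval is cluster-specific and not shown):

  # Hypothetical manual re-run of one failing query against Thanos Querier.
  QUERY='federate_samples{job="telemeter-client"} >= 10'
  kubectl --namespace=e2e-test-prometheus-pvkxc exec execpod -- \
    curl -s -k -G -H "Authorization: Bearer $TOKEN" \
    --data-urlencode "query=${QUERY}" \
    https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query
  # An empty result usually points at the telemeter-client itself; assuming telemetry is
  # enabled on this cluster, its pods live in openshift-monitoring:
  kubectl --namespace=openshift-monitoring get pods | grep telemeter-client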

Stderr
_sig-api-machinery__Feature_APIServer__Late__API_LBs_follow_/readyz_of_kube-apiserver_and_stop_sending_requests__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.7s

_sig-api-machinery__Feature_APIServer__Late__API_LBs_follow_/readyz_of_kube-apiserver_and_don't_send_request_early__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 0.5s

_sig-arch__Late__operators_should_not_create_watch_channels_very_often__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.1s

Failed:
fail [github.com/openshift/origin/test/extended/apiserver/api_requests.go:437]: Expected
    <bool>: true
not to be true

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-arch][Late]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[It] operators should not create watch channels very often [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/apiserver/api_requests.go:93
Oct 13 10:40:36.877: INFO: operator=authentication-operator, watchrequestcount=456, upperbound=594, ratio=0.7676767676767676
Oct 13 10:40:36.877: INFO: operator=ingress-operator, watchrequestcount=426, upperbound=830, ratio=0.5132530120481927
Oct 13 10:40:36.877: INFO: operator=openshift-apiserver-operator, watchrequestcount=369, upperbound=440, ratio=0.8386363636363636
Oct 13 10:40:36.877: INFO: operator=kube-apiserver-operator, watchrequestcount=326, upperbound=488, ratio=0.6680327868852459
Oct 13 10:40:36.877: INFO: operator=cluster-storage-operator, watchrequestcount=242, upperbound=394, ratio=0.6142131979695431
Oct 13 10:40:36.877: INFO: operator=kube-controller-manager-operator, watchrequestcount=238, upperbound=366, ratio=0.6502732240437158
Oct 13 10:40:36.877: INFO: operator=openshift-controller-manager-operator, watchrequestcount=236, upperbound=418, ratio=0.5645933014354066
Oct 13 10:40:36.877: INFO: operator=openshift-kube-scheduler-operator, watchrequestcount=212, upperbound=258, ratio=0.8217054263565892
Oct 13 10:40:36.877: INFO: operator=etcd-operator, watchrequestcount=205, upperbound=280, ratio=0.7321428571428571
Oct 13 10:40:36.877: INFO: operator=console-operator, watchrequestcount=191, upperbound=250, ratio=0.764
Oct 13 10:40:36.877: INFO: operator=manila-csi-driver-operator, watchrequestcount=186, upperbound=1230, ratio=0.15121951219512195
Oct 13 10:40:36.877: INFO: operator=service-ca-operator, watchrequestcount=157, upperbound=226, ratio=0.6946902654867256
Oct 13 10:40:36.877: INFO: operator=openstack-cinder-csi-driver-operator, watchrequestcount=144, upperbound=1112, ratio=0.12949640287769784
Oct 13 10:40:36.877: INFO: operator=cluster-image-registry-operator, watchrequestcount=138, upperbound=214, ratio=0.6448598130841121
Oct 13 10:40:36.877: INFO: operator=prometheus-operator, watchrequestcount=137, upperbound=216, ratio=0.6342592592592593
Oct 13 10:40:36.877: INFO: operator=cluster-monitoring-operator, watchrequestcount=84, upperbound=82, ratio=1.024390243902439
Oct 13 10:40:36.877: INFO: Operator cluster-monitoring-operator produces more watch requests than expected
Oct 13 10:40:36.877: INFO: operator=csi-snapshot-controller-operator, watchrequestcount=81, upperbound=98, ratio=0.826530612244898
Oct 13 10:40:36.877: INFO: operator=machine-api-operator, watchrequestcount=67, upperbound=122, ratio=0.5491803278688525
Oct 13 10:40:36.877: INFO: operator=cluster-autoscaler-operator, watchrequestcount=60, upperbound=90, ratio=0.6666666666666666
Oct 13 10:40:36.877: INFO: operator=openshift-config-operator, watchrequestcount=57, upperbound=118, ratio=0.4830508474576271
Oct 13 10:40:36.877: INFO: operator=cloud-credential-operator, watchrequestcount=56, upperbound=86, ratio=0.6511627906976745
Oct 13 10:40:36.877: INFO: operator=cluster-node-tuning-operator, watchrequestcount=51, upperbound=72, ratio=0.7083333333333334
Oct 13 10:40:36.877: INFO: operator=dns-operator, watchrequestcount=51, upperbound=128, ratio=0.3984375
Oct 13 10:40:36.877: INFO: operator=kube-storage-version-migrator-operator, watchrequestcount=49, upperbound=74, ratio=0.6621621621621622
Oct 13 10:40:36.877: INFO: operator=cluster-samples-operator, watchrequestcount=33, upperbound=56, ratio=0.5892857142857143
Oct 13 10:40:36.877: INFO: operator=marketplace-operator, watchrequestcount=24, upperbound=32, ratio=0.75
[AfterEach] [sig-arch][Late]
  github.com/openshift/origin/test/extended/util/client.go:140
[AfterEach] [sig-arch][Late]
  github.com/openshift/origin/test/extended/util/client.go:141
fail [github.com/openshift/origin/test/extended/apiserver/api_requests.go:437]: Expected
    <bool>: true
not to be true
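The assertion trips because at least one operator's observed watch-request count exceeds its per-operator upper bound; from the listing above, only cluster-monitoring-operator is over the line (84 watch requests against an upper bound of 82). Recomputing the offending ratio from those logged values is plain arithmetic; a one-liner such as the following (values copied from the log, not re-measured) reproduces the 1.0243... figure reported above:

  # Ratio check for the operator flagged above: 84 / 82.
  awk 'BEGIN { printf "cluster-monitoring-operator ratio = %.6f\n", 84 / 82 }'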

Stderr
_sig-arch__Late__clients_should_not_use_APIs_that_are_removed_in_upcoming_releases__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.5s

Failed:
flake: api cronjobs.v1beta1.batch, removed in release 1.25, was accessed 185 times
api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 58137 times
api horizontalpodautoscalers.v2beta1.autoscaling, removed in release 1.25, was accessed 182 times
api poddisruptionbudgets.v1beta1.policy, removed in release 1.25, was accessed 181 times
api podsecuritypolicies.v1beta1.policy, removed in release 1.25, was accessed 1080 times
api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 38080 times
user/e2e-test-resolve-local-names-z95sf-user accessed cronjobs.v1beta1.batch 2 times
user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 3 times
user/system:admin accessed podsecuritypolicies.v1beta1.policy 894 times
user/system:admin accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 2 times
user/system:apiserver accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 52940 times
user/system:apiserver accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 36952 times
user/system:kube-controller-manager accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 182 times
user/system:kube-controller-manager accessed podsecuritypolicies.v1beta1.policy 175 times
user/system:kube-controller-manager accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 181 times
user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 4466 times
user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 406 times
user/system:serviceaccount:openshift-insights:gather accessed podsecuritypolicies.v1beta1.policy 11 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed cronjobs.v1beta1.batch 183 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta1.autoscaling 182 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed poddisruptionbudgets.v1beta1.policy 181 times
user/system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 546 times
user/system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 539 times
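The flake output above lists clients still calling beta APIs scheduled for removal in Kubernetes 1.25 and 1.26 (heaviest by far: system:apiserver and the oauth-apiserver service account on the flowcontrol v1beta1 resources). A hedged way to produce a similar summary on a live cluster is to read the OpenShift APIRequestCount objects; this sketch assumes that API is present and that its status exposes removedInRelease and requestCount fields as in recent OpenShift releases:

  # Hypothetical spot-check: list request counts for APIs flagged for removal.
  kubectl get apirequestcounts \
    -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.status.requestCount}{"\t"}{.metadata.name}{"\n"}{end}' | sort
  # Drill into a single resource (e.g. the flowschemas entry) to see per-user counts:
  kubectl get apirequestcounts flowschemas.v1beta1.flowcontrol.apiserver.k8s.io -o yaml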

Stdout
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-arch][Late]
  github.com/openshift/origin/test/extended/util/client.go:142
STEP: Creating a kubernetes client
[It] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/apiserver/api_requests.go:27
Oct 13 10:40:37.192: INFO: api cronjobs.v1beta1.batch, removed in release 1.25, was accessed 185 times
Oct 13 10:40:37.192: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 58137 times
Oct 13 10:40:37.192: INFO: api horizontalpodautoscalers.v2beta1.autoscaling, removed in release 1.25, was accessed 182 times
Oct 13 10:40:37.192: INFO: api poddisruptionbudgets.v1beta1.policy, removed in release 1.25, was accessed 181 times
Oct 13 10:40:37.192: INFO: api podsecuritypolicies.v1beta1.policy, removed in release 1.25, was accessed 1080 times
Oct 13 10:40:37.192: INFO: api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 38080 times
Oct 13 10:40:37.192: INFO: user/system:admin accessed podsecuritypolicies.v1beta1.policy 894 times
Oct 13 10:40:37.192: INFO: user/system:admin accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 2 times
Oct 13 10:40:37.192: INFO: user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 3 times
Oct 13 10:40:37.192: INFO: user/system:serviceaccount:openshift-insights:gather accessed podsecuritypolicies.v1beta1.policy 11 times
Oct 13 10:40:37.192: INFO: user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed cronjobs.v1beta1.batch 183 times
Oct 13 10:40:37.192: INFO: user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta1.autoscaling 182 times
Oct 13 10:40:37.192: INFO: user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed poddisruptionbudgets.v1beta1.policy 181 times
Oct 13 10:40:37.192: INFO: user/e2e-test-resolve-local-names-z95sf-user accessed cronjobs.v1beta1.batch 2 times
Oct 13 10:40:37.192: INFO: user/system:apiserver accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 52940 times
Oct 13 10:40:37.192: INFO: user/system:apiserver accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 36952 times
Oct 13 10:40:37.192: INFO: user/system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 546 times
Oct 13 10:40:37.192: INFO: user/system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 539 times
Oct 13 10:40:37.192: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 4466 times
Oct 13 10:40:37.192: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 406 times
Oct 13 10:40:37.192: INFO: user/system:kube-controller-manager accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 182 times
Oct 13 10:40:37.192: INFO: user/system:kube-controller-manager accessed podsecuritypolicies.v1beta1.policy 175 times
Oct 13 10:40:37.192: INFO: user/system:kube-controller-manager accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 181 times
Oct 13 10:40:37.192: INFO: api cronjobs.v1beta1.batch, removed in release 1.25, was accessed 185 times
api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 58137 times
api horizontalpodautoscalers.v2beta1.autoscaling, removed in release 1.25, was accessed 182 times
api poddisruptionbudgets.v1beta1.policy, removed in release 1.25, was accessed 181 times
api podsecuritypolicies.v1beta1.policy, removed in release 1.25, was accessed 1080 times
api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 38080 times
user/e2e-test-resolve-local-names-z95sf-user accessed cronjobs.v1beta1.batch 2 times
user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 3 times
user/system:admin accessed podsecuritypolicies.v1beta1.policy 894 times
user/system:admin accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 2 times
user/system:apiserver accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 52940 times
user/system:apiserver accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 36952 times
user/system:kube-controller-manager accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 182 times
user/system:kube-controller-manager accessed podsecuritypolicies.v1beta1.policy 175 times
user/system:kube-controller-manager accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 181 times
user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 4466 times
user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 406 times
user/system:serviceaccount:openshift-insights:gather accessed podsecuritypolicies.v1beta1.policy 11 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed cronjobs.v1beta1.batch 183 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta1.autoscaling 182 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed poddisruptionbudgets.v1beta1.policy 181 times
user/system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 546 times
user/system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 539 times
[AfterEach] [sig-arch][Late]
  github.com/openshift/origin/test/extended/util/client.go:140
[AfterEach] [sig-arch][Late]
  github.com/openshift/origin/test/extended/util/client.go:141
flake: api cronjobs.v1beta1.batch, removed in release 1.25, was accessed 185 times
api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 58137 times
api horizontalpodautoscalers.v2beta1.autoscaling, removed in release 1.25, was accessed 182 times
api poddisruptionbudgets.v1beta1.policy, removed in release 1.25, was accessed 181 times
api podsecuritypolicies.v1beta1.policy, removed in release 1.25, was accessed 1080 times
api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 38080 times
user/e2e-test-resolve-local-names-z95sf-user accessed cronjobs.v1beta1.batch 2 times
user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 3 times
user/system:admin accessed podsecuritypolicies.v1beta1.policy 894 times
user/system:admin accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 2 times
user/system:apiserver accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 52940 times
user/system:apiserver accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 36952 times
user/system:kube-controller-manager accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 182 times
user/system:kube-controller-manager accessed podsecuritypolicies.v1beta1.policy 175 times
user/system:kube-controller-manager accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 181 times
user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 4466 times
user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 406 times
user/system:serviceaccount:openshift-insights:gather accessed podsecuritypolicies.v1beta1.policy 11 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed cronjobs.v1beta1.batch 183 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta1.autoscaling 182 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed poddisruptionbudgets.v1beta1.policy 181 times
user/system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 546 times
user/system:serviceaccount:openshift-oauth-apiserver:oauth-apiserver-sa accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 539 times

Stderr
_sig-arch__Late__clients_should_not_use_APIs_that_are_removed_in_upcoming_releases__Suite_openshift/conformance/parallel_
no-testclass
Time Taken: 3.5s